Generalized Evolutionary Point Processes: Model Specifications and Model Comparison

October 15, 2019, by Philip A. White, et al.

Generalized evolutionary point processes offer a class of point process models that allows for either excitation or inhibition based upon the history of the process. In this regard, we propose a modeling framework that generalizes the nonlinear Hawkes process. Working within a Bayesian framework, model fitting is implemented through Markov chain Monte Carlo; this entails discussion of the computation of the likelihood for such point patterns. Furthermore, for this class of models, we discuss strategies for model comparison. Using simulation, we illustrate how well we can distinguish these models from point pattern specifications with conditionally independent event times, e.g., Poisson processes. Specifically, we demonstrate that these models can correctly identify true relationships (i.e., excitation or inhibition/control). Then, we consider a novel extension of the log Gaussian Cox process that incorporates evolutionary behavior and illustrate that our model comparison approach prefers the evolutionary log Gaussian Cox process over simpler models. We also examine a real dataset consisting of violent crime events from the 11th police district in Chicago for the year 2018. These data exhibit strong daily seasonality as well as changes across the year. After we account for these data attributes, we find significant but mild self-excitation, implying that event occurrence increases the intensity of future events.

1 Introduction

Self-exciting point process models, in particular the Hawkes process, are a class of point processes that capture excitatory relationships among observed events (Hawkes, 1971a, b). More precisely, the occurrence of an event in the window $(t, t + \Delta t)$ is encouraged by the occurrence of events recent to time $t$. The Hawkes process has come to prominence for modeling earthquakes (see Ogata, 1988, 1998; Fox et al., 2016), crime events (see, e.g., Mohler et al., 2011), financial trading (see Embrechts et al., 2011), and neurophysiology (see Chornoboy et al., 1988, for an early discussion). More recently, these models have been adopted and extended in the machine learning literature to quantify the interactions between events on network structures (see, e.g., Blundell et al., 2012; Farajtabar et al., 2017; Chen and Tan, 2018).

In many settings, strict self-excitation is too limiting. With a model that allows for both excitation and no excitation, Linderman and Adams (2014) argue for a spike-and-slab estimation approach for inference about excitation parameters. However, the presence or absence of excitation does not allow for identifying the presence of inhibitory relationships between events.

Some authors have lifted the constraint of strict excitation but have failed to insist that the intensity be nonnegative (see, e.g., Pillow et al., 2008). In neurophysiology, Hansen et al. (2015), Mei and Eisner (2017), and Chen et al. (2017) agree that strict excitation is too limiting because inhibition is an important relational component in neuronal networks, and they offer a variety of coherent remedies.

Hansen et al. (2015) propose a simple rectifier function around the intensity to force the intensity to be non-negative. We refer to this as a Tobit link (Tobin, 1958) specification. Chen et al. (2017) propose constrained versions of the Hawkes process that allow for inhibitory relationships in a theoretically coherent way. However, their approach requires a complicated iterative construction procedure. Mei and Eisner (2017) argue for an intuitive approach that uses a scaled softplus function, e.g., $g(x) = s \log(1 + e^{x/s})$, as a link function for an unconstrained intensity, in order to restrict the intensity to be positive. We generalize the approach of Mei and Eisner (2017) to consider more general link functions.

Our contributions are as follows. First, we propose straightforward and intuitive evolutionary point process models without aggregating to arbitrary time windows. For this, we generalize nonlinear Hawkes processes to also include dampening relationships. Next, in this context, we discuss model fitting in a fully Bayesian framework that allows for full posterior inference regarding parameters along with associated uncertainty. Further, we propose model comparison criteria appropriate for evolutionary point processes. Then, we employ a simulation study to illuminate when and how well we can identify departure from conditionally independent event times in favor of excitation or inhibition. As a part of this, we introduce a novel extension of the log Gaussian Cox process that includes an evolutionary component. Lastly, we consider a real dataset and, using this evolutionary log Gaussian Cox process, reveal the presence of mild self-excitation with regard to event times.

We define an evolutionary point process as a point process model whose specification relies on previously observed points, i.e., one that is specified through a conditional intensity; we borrow this terminology from Rasmussen (2018). Within this modeling framework, we propose useful metrics for model comparison. Previous approaches for comparing evolutionary point process models, in particular Hawkes process models, include using the Akaike information criterion (AIC) (Ogata, 1988) or employing simulations from the model (Chen and Tan, 2018) with mean squared error. Strategies we eschew are the use of the random time change theorem (see, e.g., Meyer, 1971; Ogata, 1988) and Pearson-like residuals (see, e.g., Bray and Schoenberg, 2013). We avoid these approaches because they first require estimation of the compensator or the intensity before comparing the data to model predictions. Hence, they use the data twice, weakening model criticism. We also note the hypothesis testing approach of Dachian and Kutoyants (2008). They consider formal locally most powerful testing for departure from a Poisson process in the direction of self-exciting behavior. We work within a Bayesian framework, with explicit classes of parametric models, and with model comparison in the data space, so our perspective is not directly comparable with theirs.

The format of the paper is as follows. We introduce evolutionary modeling in the context of current modeling approaches in Section 2. We discuss generalization of the nonlinear Hawkes process that permits both excitatory and inhibitory relationships in Section 3. In response to unique challenges associated with evolutionary point processes, we propose model comparison approaches in Section 4. In Section 5, we provide simulation studies that explore what aspects of the model we can effectively discriminate using model comparison. Then, in Section 6, we provide an analysis of violent crime events in the 11th Police District in Chicago, Illinois, USA in the year 2018 where we find mild self-excitation of event times, even accounting for other data characteristics. We conclude with a summary and possible extensions of our modeling framework in Section 7.

2 Evolutionary Point Processes

For a realization of a point process, we denote by $N$ the counting measure over $(0, T]$, where, for any Borel set $B \subseteq (0, T]$, $N(B) = \sum_i \mathbf{1}(t_i \in B)$ and $\mathbf{1}$ is the indicator function. We define an evolutionary point process through a conditional intensity; that is, its intensity at time $t$ depends upon the history of the process up to time $t$. Let $\mathbf{t} = \{t_1, \ldots, t_n\}$ be an observed point pattern of $n$ events, where each $t_i \in (0, T]$ is an event time. We let the history of the point pattern up to time $t$ be $\mathcal{H}_t = \{t_i : t_i < t\}$.

The conditional intensity at time $t$, given the history up to time $t$, is denoted by $\lambda^*(t) \equiv \lambda(t \mid \mathcal{H}_t)$ and captures the (instantaneous) expected number of points per unit time. These types of point processes allow the intensity to depend directly upon event occurrence, enabling either excitation or dampening of the intensity. Here, we provide a few examples from each class but do not attempt an exhaustive review.

The compensator associated with a point process with intensity $\lambda(t)$ is customarily defined as $\Lambda(t) = \int_0^t \lambda(s)\,ds$. Applied to an evolutionary point process, we denote it by $\Lambda^*(t)$, with

$$\Lambda^*(t) = \int_0^t \lambda^*(s)\,ds. \qquad (1)$$

The compensator is a cumulative intensity and gives the expected number of points up to time $t$. For most point process models, the intensity has unknown parameters, which we denote by $\boldsymbol{\theta}$ and, accordingly, we write $\lambda(t; \boldsymbol{\theta})$ or, for a conditional intensity, $\lambda^*(t; \boldsymbol{\theta})$.

Turning to self-exciting point process models, the Hawkes process (Hawkes, 1971b) is the most common. For a point pattern with events $t_1 < t_2 < \cdots < t_n$, the conditional intensity of the Hawkes process is written as

$$\lambda^*(t) = \mu(t) + \sum_{t_i < t} \gamma(t - t_i), \qquad (2)$$

where $\mu(t) \ge 0$ is a parametric specification for the background intensity and $\gamma(\cdot) \ge 0$ is the parametric excitation intensity. So, altogether, $\boldsymbol{\theta}$ collects the parameters of both $\mu$ and $\gamma$.

In most work the background intensity is assumed to be constant, ignoring potentially relevant covariate information. Simma and Jordan (2012) propose using covariate information in both $\mu$ and $\gamma$. This is a potentially important modeling aspect, but it is beyond the scope of this manuscript.

Some possible choices of $\gamma$ are given in Table 1. As was done by Rasmussen (2013), we decompose the excitation (or triggering or offspring) function into a density ($\gamma^*$) times a multiplicative factor ($\alpha$). That is, $\gamma(w) = \alpha\,\gamma^*(w)$, where $w$ is the time since a previous event. In the examples given in Table 1, $\alpha$ serves as a scaling factor for the normalized offspring intensity $\gamma^*$. Though some applications might encourage alternative choices for $\gamma^*$, in the sequel we confine ourselves to the exponential.

Function        $\gamma^*(w)$        Parameters
Exponential        $\beta\,e^{-\beta w}$        $\alpha$, $\beta$
Uniform        $(b - a)^{-1}\,\mathbf{1}(a \le w \le b)$        $\alpha$, $a$, $b$
Gaussian (half-normal)        $\sqrt{2/(\pi\sigma^{2})}\,e^{-w^{2}/(2\sigma^{2})}$        $\alpha$, $\sigma$
Triangle        triangular density on $(0, b)$ with mode $c$        $\alpha$, $b$, $c$
Gamma        $\{\beta^{k}/\Gamma(k)\}\,w^{k-1}e^{-\beta w}$        $\alpha$, $k$, $\beta$
Table 1: Possible triggering functions $\gamma^*(w)$, where $w$ represents the time difference between two points; in each case the triggering function is $\gamma(w) = \alpha\,\gamma^*(w)$.
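To make the decomposition concrete, here is a minimal Python/NumPy sketch, under our own naming and assuming the exponential choice of $\gamma^*$ from Table 1, of the triggering function and the additive Hawkes intensity in (2); it is an illustration, not the authors' implementation.

import numpy as np

def gamma_exp(w, alpha, beta):
    # Exponential triggering: gamma(w) = alpha * beta * exp(-beta * w) for w > 0, else 0.
    w = np.asarray(w, dtype=float)
    return np.where(w > 0, alpha * beta * np.exp(-beta * w), 0.0)

def hawkes_intensity(t, events, mu, alpha, beta):
    # Conditional intensity (2): constant background mu plus contributions from past events.
    events = np.asarray(events, dtype=float)
    past = events[events < t]
    return mu + gamma_exp(t - past, alpha, beta).sum()

# Example: intensity just after a burst of events.
events = [2.0, 7.0, 7.5, 8.0, 8.5, 9.0]
print(hawkes_intensity(9.1, events, mu=0.5, alpha=0.8, beta=2.0))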

The assumption of additive excitation, as in (2), may not be desirable or justified by the application. In this case, the nonlinear Hawkes process can be used (see Brémaud and Massoulié, 1996; Zhu, 2013, for fuller discussion of nonlinear Hawkes processes). For a monotonic function $g(\cdot)$, the nonlinear Hawkes process has conditional intensity function

$$\lambda^*(t) = g\left(\mu(t) + \sum_{t_i < t} \gamma(t - t_i)\right), \qquad (3)$$

where the components of the intensity are analogous to those in (2), usually with the same constraints. This is still a strictly self-exciting point process model; $g$ merely modulates the conditional intensity across $t$. Choices for $g$ are considered in the next section.

Self-controlling (or self-regulating) point processes (Isham and Westcott, 1979) can be viewed as an extension of the nonlinear Hawkes processes in (3). These processes, unlike self-exciting point processes, are applied in settings where the occurrence of events up to time $t$ is assumed to decrease the incidence of events in the interval $(t, t + \Delta t)$. A model of this type has been used, for instance, in modeling queues (Daley and Vere-Jones, 2003). An example of a suitable intensity is

$$\lambda^*(t) = \exp\{\mu t - \alpha N(t^-)\}, \qquad (4)$$

where $\mu > 0$, $\alpha > 0$, and $N(t^-)$ is the counting measure over $(0, t)$. In this case, $\mu t$ and $\alpha N(t^-)$ compete, increasing and decreasing the intensity, respectively. Ostensibly, this model could be adapted to capture the opposing phenomenon, self-activation, where the intensity decreases with $t$ until events excite the intensity.

The likelihood for a realization $\mathbf{t} = \{t_1, \ldots, t_n\}$ on $(0, T]$ under a general conditional intensity, with associated compensator, becomes

$$L(\boldsymbol{\theta}; \mathbf{t}) = \left[\prod_{i=1}^{n} \lambda^*(t_i)\right] \exp\{-\Lambda^*(T)\}, \qquad (5)$$

which is then used for inference about an evolutionary point pattern. Derivation of this likelihood is provided in Appendix A.

3 General Evolutionary Point Process Models

To specify general evolutionary point process models, we adopt versions of the nonlinear Hawkes process (3), allowing the triggering function $\gamma$ to be either positive or negative and also using a monotonic function $g$ that maps the unconstrained intensity to $\mathbb{R}^+$. That is, for $g: \mathbb{R} \to \mathbb{R}^+$, the conditional intensity becomes

$$\lambda^*(t) = g\left(\mu(t) + \sum_{t_i < t} \gamma(t - t_i)\right). \qquad (6)$$

Selections for $\gamma$ presented in Table 1 are augmented so that we can now allow $\alpha < 0$. To illustrate the flexibility of this model class, Figure 1 shows realizations of $\lambda^*(t)$ for several different choices of $\alpha$ and $g$, which are detailed in the discussion below.

As discussed in Section 1, models of this type have been proposed (viz., with the rectifier or the scaled softplus function as $g$); however, other selections of $g$ may also be appropriate. We provide several examples of $g$ in Table 2. Within this framework, modeling decisions focus on selection of $\gamma$ and $g$. The key issue concerns the interplay between $\gamma$ and $g$. Can we distinguish $g$'s for a given $\gamma$? Can we distinguish $\gamma$'s for a given $g$? This is the intent of our simulation study in Section 5. It is anticipated that a large number of event times will be needed to distinguish models. Moreover, perhaps the more important objective is to be able to distinguish a nonlinear Poisson process, i.e., a process with $\alpha = 0$, from an exciting or from an inhibiting process. So, this becomes the emphasis of our simulation investigation. In the absence of covariates, a nonlinear homogeneous Poisson process is just a reparametrized homogeneous Poisson process.

We offer a brief technical paragraph. For models of this form, Brémaud and Massoulié (1996) show that nonlinear Hawkes processes are stable only if the link function $g$ is Lipschitz. Here, a stable model is one that, when simulated from, will not generate arbitrarily large point patterns. Neural spike trains are often modeled as a generalized linear model using the exponential link, $g(x) = e^x$, where the model is strictly excitatory, giving an extension of the homogeneous or nonhomogeneous Poisson process. However, these models are not stable because $e^x$ is not Lipschitz (see Gerhard et al., 2017, for a discussion about the instability of this model). For some remedies for the instability of these models, see Chen et al. (2019); however, this is not our focus. We note that no such issues arise for inhibitory models using the exponential link. When $\alpha > 0$ gives self-excitation, the power link in Table 2 yields a stable point process if the exponent satisfies $b \le 1$. When $b > 1$, the link is not Lipschitz and thus the process is not stable. The remaining choices in Table 2 are Lipschitz.

Function        $g(x)$
Identity        $g(x) = x$
Power (Tobit/rectifier when $b = 1$)        $g(x) = \{\max(0, x)\}^{b}$
Soft-plus        $g(x) = \log(1 + e^{x})$
Scaled soft-plus        $g(x) = s\,\log(1 + e^{x/s})$
Exponential        $g(x) = e^{x}$
Table 2: Possible link functions $g$ for the generalized evolutionary point processes.
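For illustration, a short Python sketch of these link functions under our parameterizations (we write the power link as a rectifier raised to an exponent b, which reduces to the Tobit/rectifier at b = 1); these are sketches of standard functions, not code from the paper.

import numpy as np

def identity(x):
    return np.asarray(x, dtype=float)

def power_link(x, b=1.0):
    # Rectify, then raise to the power b; b = 1 gives the Tobit/rectifier link.
    return np.maximum(np.asarray(x, dtype=float), 0.0) ** b

def softplus(x):
    return np.log1p(np.exp(np.asarray(x, dtype=float)))

def scaled_softplus(x, s=1.0):
    # s * log(1 + exp(x / s)); approaches the rectifier as s -> 0.
    return s * np.log1p(np.exp(np.asarray(x, dtype=float) / s))

def exponential(x):
    return np.exp(np.asarray(x, dtype=float))

x = np.array([-1.0, 0.0, 2.0])
for g in (identity, power_link, softplus, scaled_softplus, exponential):
    print(g.__name__, g(x))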

To visualize the flexibility of the models presented here, we plot several examples of intensities that arise from (6) by changing $\alpha$ and $g$. Returning to Figure 1, in the left panel we plot intensities with different values of $\alpha$ for the same set of observed events, where we take $\mu$ constant, $\gamma^*$ as an exponential, and $g$ as the Tobit link. In the right panel of Figure 1 we plot the intensity for three point processes that differ in terms of their link function $g$: the exponential link and the power link with $b = 1/2$ and $b = 1$. The exponential link shows greater self-excitement than the power link with $b = 1$. Because of the shape of the square-root function, the power link with $b = 1/2$ shows more excitation than the power link with $b = 1$ when the argument of the link is less than one but less excitation when it exceeds one. Moreover, the duration of the excitation depends on $g$. While excitation is most extreme with the exponential link, it also decays most rapidly. On the other hand, excitation decays more slowly for the power link with $b = 1/2$. These plots illustrate the flexibility of these models; however, as we discuss in Section 5, it is difficult to discriminate among these link functions from a single point pattern.

Figure 1: (Left) Self-exciting intensities with different $\alpha$'s for a Tobit link $g$. Events, indicated by gray lines, are at times 2, 7, 7.5, 8, 8.5, and 9. (Right) Self-exciting intensities with different link functions $g$. Events, indicated by gray lines, are at times 1, 1.2, 1.4, 1.6, 1.8, 2, 7, and 8.

3.1 Prior Distributions and Model Fitting

To evaluate the likelihood (5), we must compute the compensator $\Lambda^*(T) = \int_0^T \lambda^*(s)\,ds$. This integral is not generally available analytically because of the link function $g$. Therefore, the integral is estimated numerically. Unlike the Hawkes process, there is no representation of this process as a cluster process (see, e.g., Hawkes and Oakes, 1974; Rasmussen, 2013). However, there are other computational methods that can expedite likelihood computation. For example, with the exponential excitation function, the sums in the intensity can be written using a recursion (Ogata, 1978) that enables likelihood computations in time linear in the number of events. Mei and Eisner (2017) use a Monte Carlo approximation for the compensator. However, since this integral is only one-dimensional, we suggest that a simple numerical integration over a fine grid is faster and a better approximation than the Monte Carlo integral. We use the trapezoidal rule for numerical integration; we found no additional benefit compared to using Simpson's rule, which employs a polynomial approximation to the integrand.
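As a sketch of this computation (assuming a constant background $\mu$, the exponential triggering function, and a user-supplied link $g$; grid size and names are ours), the log-likelihood (5) with a trapezoidal approximation of the compensator can be written as:

import numpy as np

def cond_intensity(s, events, mu, alpha, beta, g):
    # lambda*(s) = g(mu + sum_{t_i < s} alpha * beta * exp(-beta * (s - t_i))).
    past = events[events < s]
    return g(mu + np.sum(alpha * beta * np.exp(-beta * (s - past))))

def log_likelihood(events, T, mu, alpha, beta, g, n_grid=2000):
    events = np.sort(np.asarray(events, dtype=float))
    # Sum of log intensities at the observed event times.
    log_lam = sum(np.log(cond_intensity(t, events, mu, alpha, beta, g)) for t in events)
    # Trapezoidal approximation of the compensator over a fine grid on (0, T].
    grid = np.linspace(0.0, T, n_grid)
    lam = np.array([cond_intensity(s, events, mu, alpha, beta, g) for s in grid])
    compensator = np.sum((lam[1:] + lam[:-1]) / 2.0 * np.diff(grid))
    return log_lam - compensator

softplus = lambda x: np.log1p(np.exp(x))
events = np.array([0.7, 1.1, 1.15, 3.4, 3.5, 6.2])
print(log_likelihood(events, T=10.0, mu=0.4, alpha=0.5, beta=2.0, g=softplus))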

In the Bayesian framework, to completely specify the model, we also need to choose prior distributions for the parameters in $\mu$ and $\gamma$. In the examples we explore here, we assume that $\mu > 0$ (with some selections of $g$, e.g., the exponential link, it would be appropriate to allow $\mu \in \mathbb{R}$). Therefore, we propose Gamma and log-normal prior distributions for $\mu$. Similarly, except for $\alpha$, the parameters of $\gamma$ are strictly positive; therefore, Gamma or log-normal prior distributions are proposed there as well. For $\alpha$, we suggest normal or uniform prior distributions. We could also imagine using a mixture prior distribution with a point mass at $\alpha = 0$ and a continuous distribution over the rest of the support of $\alpha$ to see how much support the data offer for a homogeneous Poisson process. This is similar to what Linderman and Adams (2014) proposed for the Hawkes process.

Our model fitting yields samples from the posterior distribution, which are used to create realizations of the intensity function over $(0, T]$. In Section 4, we discuss how these realizations are employed for model comparison. The models that we have presented here are too general to supply an overall Gibbs sampler. However, we do propose a Markov chain Monte Carlo (MCMC) model-fitting approach using the Metropolis-Hastings algorithm. To select proposal or candidate distributions for the parameters in the model, we use an adaptive MCMC scheme in which we begin our model fitting using Metropolis-within-Gibbs updates. Then, while still in the burn-in period, we change to adaptive multivariate Metropolis updates, following the strategy presented in Haario et al. (1999). We detail our model-fitting approach in Appendix B.

4 Model Comparison

There is little literature on approaches for comparing evolutionary point process models, and what exists is specifically for Hawkes process models. Proposals include using the Akaike information criterion (AIC) (Ogata, 1988) or using simulations from the model (Chen and Tan, 2018) with mean squared error. We elect to implement such comparison in the data space rather than in the parameter space so that we can compare what is predicted under a model with what was observed. Furthermore, when possible, we would prefer to do this out-of-sample, that is, as a cross-validation, in order to avoid using the same data to fit the models and then also to compare models. This enables more critical model comparison.

Because there is some tradition for model comparison in parameter space, we also include the deviance information criterion (DIC) in our model comparison. For our setting, with MCMC samples $\boldsymbol{\theta}^{(1)}, \ldots, \boldsymbol{\theta}^{(M)}$, we define the deviance $D(\boldsymbol{\theta}) = -2\log L(\boldsymbol{\theta}; \mathbf{t})$, $\bar{D} = M^{-1}\sum_{m=1}^{M} D(\boldsymbol{\theta}^{(m)})$, and $\bar{\boldsymbol{\theta}} = M^{-1}\sum_{m=1}^{M} \boldsymbol{\theta}^{(m)}$. The effective model size or number of parameters is computed as $p_D = \bar{D} - D(\bar{\boldsymbol{\theta}})$. Then, the DIC is computed as $\text{DIC} = D(\bar{\boldsymbol{\theta}}) + 2 p_D = \bar{D} + p_D$. We will see that DIC performs well when comparing evolutionary point process models but fails to successfully discriminate evolutionary point process models from non-evolutionary models in some cases.
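A sketch of this DIC computation from MCMC output (function and variable names are ours; log_lik_fn stands in for the point process log-likelihood above):

import numpy as np

def dic(log_lik_fn, theta_samples):
    # DIC = D(theta_bar) + 2 * p_D, where D(theta) = -2 log L(theta) and p_D = Dbar - D(theta_bar).
    theta_samples = np.asarray(theta_samples, dtype=float)
    deviances = np.array([-2.0 * log_lik_fn(th) for th in theta_samples])
    d_bar = deviances.mean()
    d_at_mean = -2.0 * log_lik_fn(theta_samples.mean(axis=0))
    p_d = d_bar - d_at_mean
    return d_at_mean + 2.0 * p_d, p_d

# Toy illustration with a Gaussian log-likelihood (for demonstration only).
rng = np.random.default_rng(1)
data = rng.normal(size=50)
loglik = lambda th: float(np.sum(-0.5 * np.log(2 * np.pi) - 0.5 * (data - th[0]) ** 2))
print(dic(loglik, rng.normal(data.mean(), 0.1, size=(500, 1))))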

Since the intent of a self-exciting or self-inhibiting model is to capture short-term response at a given time to the history up to that time, we propose two approaches that rely on using the modeled conditional intensity to make short-term forecasts. The first approach uses the conditional intensity to approximate the probability of an event in a very short time window and compares this to the observed binary outcome of whether or not a point was present in that short window. The second uses the conditional intensity to obtain the expected number of events in a somewhat longer interval and compares this to the observed count of events in that interval. For the first criterion, consider a small change in time $\Delta t$. Then, since terms of order $o(\Delta t)$ are negligible,

$$P\left(N(t, t + \Delta t] \ge 1 \mid \mathcal{H}_t\right) \approx \lambda^*(t)\,\Delta t. \qquad (7)$$

Assuming $\lambda^*$ is roughly constant over the window, we have a model-based approximation to the probability of an event in a small window, $p = \lambda^*(t + \Delta t/2)\,\Delta t$. These probabilities can be compared with observed binary outcomes over a collection of short windows. Note that we evaluate $\lambda^*$ at $t + \Delta t/2$, i.e., a midpoint-rule approximation to $\int_t^{t + \Delta t}\lambda^*(s)\,ds$, rather than at $t$, because $\lambda^*$ is not continuous at an event time.

This procedure can be carried out retrospectively or prospectively. Retrospectively, one would fit a model to the entire point pattern, and then the model would be assessed based on that single fit. Prospectively, the model would be fit many times, each time using only the data up to a selected time point. Here, for computational convenience, we use the retrospective approach.

What needs to be specified are the choices of the evaluation times and the choices of the associated window lengths $\Delta t$, with no clear “best” strategy. We propose an approach motivated by distinguishing an evolutionary point pattern from a homogeneous Poisson process (HPP) with constant intensity $\lambda$. An evolutionary point process model, whether excitatory or inhibitory, behaves differently from an HPP right after an event. For a self-exciting model, we expect the intensity to exceed $\lambda$ shortly following events. In contrast, the intensity of inhibitory models drops below $\lambda$ following events. So, model comparison will be best served by using the set of observed event times as the evaluation times.

For each event time $t_i$ we seek to create two binary event probabilities, a $p_{0i}$ arising under an HPP model and a $p_{1i}$ arising under an evolutionary process model. We use $p_{0i}$ and $p_{1i}$ as probabilistic forecasts rather than converting them into binary predictions. Thus, we implement model comparison by comparing the performance of the two sets of probabilities using the probabilistic misclassification rates described below. To create these probabilities, we first randomly draw a window length $\Delta t_i$ to associate with each $t_i$. Let $\hat{\lambda}$ be the maximum likelihood estimate for the intensity under the HPP. Then, we draw the $\Delta t_i$ so that $p_{0i} = \hat{\lambda}\,\Delta t_i$ is the probability of an event in $(t_i, t_i + \Delta t_i]$ if the point pattern comes from an HPP (other selections of the $\Delta t_i$ may be justified; for example, self-exciting point processes can lead to $p_{1i} > 1$ even though $p_{0i}$ is less than one, and one may sample the $\Delta t_i$ differently to prevent this information loss; however, we found little benefit to these alternatives). We then fix these $\Delta t_i$. We define $p_{1i}$, via (7) with the posterior mean conditional intensity, to be the probability of an event in $(t_i, t_i + \Delta t_i]$ if the point pattern comes from the evolutionary process model. These probabilities are associated with binary outcomes for the intervals following each event time: we define $y_i = 1$ if there is one or more event in $(t_i, t_i + \Delta t_i]$ and $y_i = 0$ otherwise.

Then, to summarize model performance over the intervals following events, we employ a probabilistic misclassification rate, which we abbreviate as PMR. The definition for this misclassification rate depends on whether we are comparing self-exciting models or self-inhibiting models to an HPP. The intuition for a self-exciting model is that it will elevate probabilities following events if self-excitation is operating. Thus, misclassification would mean a small $p_{1i}$ when $y_i$ is large (i.e., when $y_i = 1$). The intuition for the inhibitory models is the opposite: we expect smaller probabilities following events. In this case, misclassification occurs when $y_i = 0$ but $p_{1i}$ is large. As a result, we have the following definitions of PMR for excitatory and inhibitory models, respectively:

(8)

When comparing a self-exciting or self-inhibiting model to an HPP, we compute the same PMR for the HPP, replacing $p_{1i}$ with $p_{0i}$.
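Since the display in (8) did not survive the conversion, the Python sketch below encodes one plausible reading of the verbal description above (penalize small probabilities on windows that do contain an event for the excitatory comparison, and large probabilities on empty windows for the inhibitory comparison); the exact form and normalization in the paper may differ, and the names are ours.

import numpy as np

def window_probs_and_outcomes(events, lam_star, deltas):
    # p_i = lambda*(t_i + delta_i / 2) * delta_i and y_i = 1{at least one event in (t_i, t_i + delta_i]}.
    events = np.sort(np.asarray(events, dtype=float))
    deltas = np.broadcast_to(np.asarray(deltas, dtype=float), events.shape)
    p = np.array([lam_star(t + d / 2.0) * d for t, d in zip(events, deltas)])
    y = np.array([np.any((events > t) & (events <= t + d)) for t, d in zip(events, deltas)], dtype=float)
    return p, y

def pmr(p, y, kind="excitatory"):
    # One plausible probabilistic misclassification rate consistent with the text.
    p, y = np.asarray(p, dtype=float), np.asarray(y, dtype=float)
    if kind == "excitatory":      # low probability assigned to windows that did contain an event
        return float(np.sum(y * (1.0 - p)) / max(np.sum(y), 1.0))
    return float(np.sum((1.0 - y) * p) / max(np.sum(1.0 - y), 1.0))   # inhibitory case

# Usage: lam_star would typically be the posterior mean conditional intensity.
events = np.array([1.0, 1.3, 1.4, 4.0, 7.5])
p, y = window_probs_and_outcomes(events, lam_star=lambda s: 0.6, deltas=0.5)
print(pmr(p, y, "excitatory"), pmr(p, y, "inhibitory"))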

An attractive feature of this criterion is that we need not generate any posterior predictive realizations from the process. We only need to obtain the posterior mean probability of an event in each window. A potential criticism, as noted above, is the arbitrariness in the selection of the $\Delta t_i$'s. However, the randomization of the $\Delta t_i$'s and the focus of our model comparison on behavior following events, i.e., using the observed event times, help to overcome this concern. As a last remark here, misclassification rates are customarily applied to a specified set of events with an associated set of outcomes for these events. Here, we have to create our own suitable events.

Turning to the second approach, we would now use the approximation in (7) to estimate the expected number of points in the time window $(t_i, t_i + \Delta t]$, rather than to create a probability. This provides a predictive distribution, which is then compared to the observed number of events in the window. Again, there is the issue of the choice of evaluation times and window lengths. Again, we use the observed event times, as discussed above; however, we now fix a common $\Delta t$ for all events. For any $t_i$, we denote the number of points observed in $(t_i, t_i + \Delta t]$ by $N_i$ and its realized value for a given dataset by $N_i^{\text{obs}}$. The windows for the binary approach were selected to create probabilities under an HPP. When counting the number of points, we consider somewhat longer intervals than used for the binary comparison, such that the expected number of points is at least one.

An important point to make here is that examining the predictive distribution of the number of points in $(t_i, t_i + \Delta t]$ is only appropriate for self-exciting model comparison. That is, we will compare the predictive distribution for $N_i$ with $N_i^{\text{obs}}$. Self-inhibiting models encourage small counts; that is, we expect decreased intensity (expected number of points) following events. Thus, in this case, comparing the predictive distribution for $N_i$ to $N_i^{\text{obs}}$ is not more informative than the binary comparison discussed above. This was confirmed in our empirical studies for self-inhibiting models.

In this regard, we do model comparison using the posterior predictive distribution for the number of events rather than the posterior mean expected number of events. The latter would encourage the use of, e.g., a predictive mean squared error criterion. The former requires the simulation of posterior predictive realizations and suggests the use of the ranked probability score (RPS) criterion. We prefer to employ the RPS (Gneiting and Raftery, 2007) since it makes the comparison using the entire predictive distribution for $N_i$ rather than a point estimate of $N_i$.

Rather than carrying out the computationally expensive Ogata simulation approach (Algorithm 1) many times for each interval $(t_i, t_i + \Delta t]$, we use the approximation (7) as the mean and simulate from a Poisson distribution to generate the posterior predictive distribution for $N_i$. Then, we compute the RPS. The RPS can be written in terms of expectations that are immediately amenable to Monte Carlo integration using the posterior predictive samples of $N_i$. Because we utilize MCMC to fit our Bayesian model, we can obtain such posterior predictive samples. The resulting Monte Carlo approximation of the RPS becomes (Krüger et al., 2016)

$$\widehat{\text{RPS}}_i = \frac{1}{M}\sum_{m=1}^{M}\left|N_i^{(m)} - N_i^{\text{obs}}\right| - \frac{1}{2M^2}\sum_{m=1}^{M}\sum_{m'=1}^{M}\left|N_i^{(m)} - N_i^{(m')}\right|, \qquad (9)$$

where $M$ is the number of MCMC samples used. Each $t_i$ has its own window and predictive distribution for $N_i$. So, we average over the $\widehat{\text{RPS}}_i$ values in (9) to obtain a single model comparison metric. The RPS can be computed in closed form if a single estimate, say the posterior mean, of $\lambda^*$ is used (Jordan et al., 2017); however, this approach ignores the uncertainty in estimating $\lambda^*$.
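A sketch of this sample-based RPS computation (our names; the predictive draws are one Poisson count per MCMC sample, generated with mean equal to the intensity at the window midpoint times the window length, as described above):

import numpy as np

def rps_sample(pred_draws, observed):
    # Monte Carlo RPS for one window: mean|N_m - N_obs| - 0.5 * mean|N_m - N_m'|.
    x = np.asarray(pred_draws, dtype=float)
    term1 = np.mean(np.abs(x - observed))
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))   # subsample for very large M
    return term1 - term2

def average_rps(lam_mid, delta, observed_counts, seed=0):
    # lam_mid: array of shape (n_windows, M) with lambda*(t_i + delta/2) for each MCMC sample.
    rng = np.random.default_rng(seed)
    scores = []
    for lam_row, n_obs in zip(np.atleast_2d(lam_mid), observed_counts):
        draws = rng.poisson(np.maximum(lam_row, 0.0) * delta)   # one predictive count per sample
        scores.append(rps_sample(draws, n_obs))
    return float(np.mean(scores))

# Toy usage with 3 windows and 200 MCMC samples.
lam_mid = np.abs(np.random.default_rng(1).normal(1.0, 0.2, size=(3, 200)))
print(average_rps(lam_mid, delta=2.0, observed_counts=[1, 3, 2]))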

5 Simulation Studies

5.1 Identifying Excitation and Inhibition

In this section, we present two simulation studies. In the first study, we examine 10 combinations of the parameters $\mu$ and $\alpha$ of a Hawkes process with an exponential triggering function. For each parameter configuration, we simulate 1000 point patterns from that model using Algorithm 1 in Appendix C and fit both a homogeneous Poisson process and an evolutionary point process model to the simulated data. In the second study, we essentially follow the same procedure except we simulate datasets from inhibitory models (i.e., models with $\alpha < 0$) using the power link. For both self-exciting and inhibitory models, we fix the decay rate $\beta$ of the exponential triggering function. In the studies, we compare the homogeneous Poisson process to an evolutionary point process using DIC and PMR, and we also use RPS for Hawkes processes.

For the Hawkes process simulation, we use combinations of $\mu$ and $\alpha$ to examine the degree of baseline activity and excitation needed for an evolutionary model to outperform an HPP. Note that the excitation levels are very low. The results from the simulation study are given in Table 3.

We find here that both PMR and RPS reveal significantly better predictive performance for the self-exciting models relative to the HPP, correctly selecting the Hawkes process models over the HPP almost always across the replicate datasets. PMR improves slightly as $\alpha$ increases. Neither PMR nor RPS benefits from the roughly doubled sample size between the two background rates. DIC seems ineffective. In fact, it preferred the Hawkes process more often than the HPP for only three of the ten parameter combinations. In all other cases, DIC would have selected the HPP more often than the Hawkes process.

                         Averages under Hawkes        Averages under HPP
Parameters ($\mu$, $\alpha$)   Average $n$   DIC   PMR   RPS   DIC   PMR   RPS
, 50.571 170.546 0.304 0.488 171.110 0.356 0.519
, 51.414 171.475 0.304 0.496 172.179 0.357 0.529
, 52.231 172.511 0.304 0.500 173.266 0.359 0.536
, 54.188 174.585 0.301 0.508 175.569 0.362 0.552
, 55.344 175.930 0.299 0.512 176.891 0.364 0.558
, 100.917 200.433 0.310 0.496 200.997 0.357 0.522
, 103.052 200.046 0.308 0.501 200.828 0.360 0.531
, 105.359 199.689 0.305 0.505 200.660 0.359 0.536
, 107.497 199.122 0.302 0.510 200.444 0.359 0.545
, 110.157 198.657 0.302 0.512 199.785 0.361 0.548
Table 3: Simulation results for mildly self-exciting models. For each model and parameter combination, we present the average number of events $n$, DIC, PMR, and RPS.

For the inhibitory process simulation, we use combinations of $\mu$ and $\alpha$ to see how much deviation from an HPP, in this case toward regularity, is needed for an evolutionary model to outperform an HPP. Compared to our Hawkes process simulations, significantly more inhibition was required to effectively distinguish self-inhibiting models from an HPP. The results are presented in Table 4.

Here, we can effectively select the evolutionary model with both PMR and DIC. In this case, the ability to distinguish models is less dependent on $\mu$ and more dependent on $\alpha$. For instance, PMR for the Hawkes model and the HPP are close when $\alpha$ is near zero, but the difference becomes much more consequential as $\alpha$ becomes more negative. Similar behavior emerges for DIC.

                         Averages under Hawkes        Averages under HPP
Parameters ($\mu$, $\alpha$)   Average $n$   DIC   PMR   DIC   PMR
, 181.795 138.706 0.416 147.435 0.422
, 154.053 159.846 0.401 176.304 0.425
, 133.997 174.070 0.372 191.070 0.427
, 118.254 179.837 0.340 198.500 0.433
, 106.049 175.626 0.304 201.341 0.433
, 454.586 -477.183 0.415 -466.340 0.419
, 386.027 -284.721 0.407 -269.405 0.421
, 334.058 -154.969 0.391 -136.152 0.423
, 295.549 -67.016 0.373 -47.768 0.425
, 263.596 -4.731 0.355 17.902 0.426
Table 4: Simulation results for self-regulating models. For each model and parameter combination, we present the average number of events $n$, DIC, and PMR.

Altogether, we can effectively select the generative evolutionary model, whether it shows excitation or inhibition, over the HPP model, even when the excitation or inhibition is relatively weak. PMR emerges as the most effective criterion.

5.2 Identifying Link Functions

Here, we provide two different simulation studies. The only difference between the studies is the link function of the generative model. Our goal is to explore whether we can effectively distinguish link functions from a simulated point pattern. To do this, we simulate 1,000 point patterns from each generative model using Algorithm 1 in Appendix C. We then fit each of the simulated datasets with four different models that differ in terms of the link function used.

Specifically, our generative models use the power link, with two different values of the exponent $b$. We then fit each simulated point pattern with four models: the power link at each of these two values of $b$, the soft-plus link, and the scaled soft-plus link. For each generative model, we fix $\mu$, $\alpha$, and $\beta$ at values for which the models all exhibit significant excitation regardless of the link function. As before, we use DIC, PMR, and RPS to compare models. The results are presented in Table 5.

We find that we cannot reliably distinguish link functions from a simulated point pattern using DIC, PMR, or RPS. When simulating from the first power-link model, there is almost no separation between any of the fitted models, regardless of the criterion considered. When simulating from the second power-link model, there is almost no separation between models except for the model using the first power link. The models using the matching power link, the soft-plus, and the scaled soft-plus were nearly indistinguishable in terms of DIC, PMR, and RPS.

Simulation Link   Model Link   Average $n$   DIC   PMR   RPS
Power, Power, 341.748 -167.983 0.283 0.555
Power, -168.040 0.282 0.548
Soft-plus -167.115 0.286 0.550
Soft-plus -167.190 0.285 0.550
Power, Power, 460.745 -796.111 0.166 0.739
Power, -823.522 0.162 0.681
Soft-plus -822.086 0.162 0.678
Soft-plus -823.444 0.162 0.681
Table 5: Simulation results for different generative models, differing by the link function. For each generative model and model fit, we present the average number of events $n$, DIC, PMR, and RPS. Blank entries indicate that the average number of events did not change depending on the model fit.

Our explanation is that, over the short windows we consider, the link functions produce very similar intensities, hence very similar predicted probabilities and counts. Also, because the number of parameters is the same for all models considered, the likelihoods of the models are very similar. These results suggest that this discrimination task will not be successful given only a single observed point pattern, as will be the case in practice.

5.3 Evolutionary Log Gaussian Cox Process Models

In this simulation study, we simulate from a model where the background intensity is that of a log Gaussian Cox process (LGCP), using the power link with $b = 1$, i.e., the Tobit link. That is,

$$\mu(t) = \exp\{\mu_0 + Z(t)\},$$

where $Z(t)$ is a mean-zero Gaussian process with exponential covariance function, and where the true values of $\mu_0$ and of the GP variance and decay parameters are fixed. We then add quickly-decaying excitation (an exponential triggering function with fixed $\alpha$ and $\beta$) to this slowly changing GP in order to help discriminate between these two components. Using this model, we simulate a single dataset, yielding 1,972 events in total. Because the models considered here are much more computationally expensive to fit, we present the results from this single simulated dataset.

For model fitting, we use prior distributions with relatively high variance compared to the true parameter values for $\mu_0$, $\alpha$, $\beta$, and the GP covariance parameters. Note that $\mu_0$ is no longer constrained to be positive because it is an argument within a natural exponent.

The goal here is to determine whether DIC, PMR, and RPS can be used to select the full model compared to models that exclude the GP or the evolutionary component. The results of this model comparison are given in Table 6.

DIC PMR RPS
GP + Evo -12043.250 0.047 1.171
GP-only -12146.550 0.088 1.302
Evo-only -11972.690 0.050 1.121
Table 6: Model comparison criteria for the evolutionary GP model, the GP-only model, and the evolutionary-only model.

The correct model had the lowest RPS and PMR, albeit by a small margin. However, both criteria substantially reflected the benefit of including the evolutionary component. In this regard, DIC valued the inclusion of the evolutionary component as well. Because DIC preferred the incorrect model by a substantial margin, it may be preferable to use our proposed intensity-based model comparison criteria when comparing models from different point process classes. Taking this a bit further, the fitted GP-only model had very fast GP decay (i.e., a large decay parameter) even though the true value of this parameter was small. This is likely because the GP-only model required a larger decay parameter to capture the excitation present in the data. On the other hand, the evolutionary GP model accurately estimated both the evolutionary parameters and the GP parameters (see Table 7).

mean sd 5% 95%
1.436 0.447 0.828 2.106
0.891 0.026 0.850 0.934
19.319 1.231 17.397 21.436
1.256 0.798 0.536 2.354
0.085 0.066 0.0199 0.219
Table 7: Posterior summaries (mean, standard deviation (sd), and the 5th and 95th percentiles) for the evolutionary LGCP model.

6 Analysis of 2018 Violent Crimes in Chicago

In this data analysis, we consider a subset of violent crimes (homicides, assaults, and robberies) in Police District 11 of Chicago for the year 2018. These data are publicly available at https://data.cityofchicago.org/Public-Safety/Crimes-2001-to-present. Event times are reported to the nearest minute, and 41 observations share a reported time with at least one other event. To avoid ties, we jitter the event times by up to thirty seconds; that is, each event time is taken to be uniformly distributed between thirty seconds before and thirty seconds after the reported minute. For interpretability of model parameters, we express our data as event times in hours from the beginning of the year. To be clear, we are not rounding event times to the nearest hour; each event keeps date-time information up to the second to which it was jittered. For example, an event recorded at 03:09:00 AM on January 3, 2018, occurs at hour 51.15051. Thus, our data belong to $(0, 8760]$. Although our model is not spatial, we plot the locations of the violent crime events in Police District 11 in Figure 2 to show the small geographic region that District 11 covers. Our primary goal is to assess whether violent crime is self-exciting or, perhaps, self-regulating in this small geographic area.

Figure 2: Map of violent crime events in police district 11.

The data show significant variability as a function of the hour of the day and over the year. We illustrate these trends in Figure 3. Specifically, violent crimes happen most often between 10 A.M. and 4 P.M. and are least common between 1 A.M. and 6 A.M. We also observe higher crime rates in May through September. Unsurprisingly, we see significant deviation from an HPP using a Kolmogorov-Smirnov test. However, it is not clear whether the deviation from an HPP can be attributed to excitation/inhibition or to daily/annual patterns in the crime data. To account for the clear daily seasonality in event intensity, as well as intensity changes over the year, we argue for modeling the background intensity using seasonal functions and a slowly varying Gaussian process, as was done in Section 5.3.

We pose an evolutionary point process model for these data using (6), where we adopt the exponential triggering function for $\gamma$ and the power link for $g$. We use the following form for the background function:

$$\mu(t) = \exp\left\{\beta_0 + \beta_1 \cos\!\left(\frac{2\pi t}{24}\right) + \beta_2 \sin\!\left(\frac{2\pi t}{24}\right) + Z(t)\right\}.$$

Here, the trigonometric terms are used to account for daily seasonality and the log-GP term $Z(t)$ addresses annual trends in the intensity of events.

An alternative specification could allow the GP to account for daily seasonality (see, e.g., Shirota and Gelfand, 2017; White and Porcu, 2019); however, using the exponential covariance function in one dimension yields a scalable GP model (see Appendix B). Because we want to capture long-term changes in event intensity, we let $\mathbf{Z} = (Z_1, \ldots, Z_{100})$ be an evenly spaced grid of 100 random effects spread over the year. We set $Z(t)$ equal to the element of $\mathbf{Z}$ whose grid time is closest to $t$. Thus, $Z(t)$ can only capture relatively coarse changes in the event intensity. In this way, short-term changes in intensity are left to be explained by the evolutionary component of the model.
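A sketch, under our reading of this specification (one daily harmonic inside the exponent plus a piecewise-constant log-GP term looked up at the nearest knot; coefficient names and example values are ours), is:

import numpy as np

def background_intensity(t, beta0, beta1, beta2, z_knots, knot_times):
    # mu(t) = exp{beta0 + beta1*cos(2*pi*t/24) + beta2*sin(2*pi*t/24) + Z(t)}, with t in hours.
    t = np.atleast_1d(np.asarray(t, dtype=float))
    # Piecewise-constant log-GP term: use the knot closest to t in time.
    idx = np.abs(t[:, None] - np.asarray(knot_times, dtype=float)[None, :]).argmin(axis=1)
    z_t = np.asarray(z_knots, dtype=float)[idx]
    seasonal = beta1 * np.cos(2 * np.pi * t / 24.0) + beta2 * np.sin(2 * np.pi * t / 24.0)
    return np.exp(beta0 + seasonal + z_t)

# 100 knots evenly spaced over the year (8760 hours), as in the text.
knot_times = np.linspace(0.0, 8760.0, 100)
z_knots = np.zeros(100)
print(background_intensity([51.15, 5000.0], beta0=-1.3, beta1=-0.3, beta2=-0.27,
                           z_knots=z_knots, knot_times=knot_times))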

Figure 3: Patterns in the Chicago violent crime data (Left) as a function of hour of the day and (Right) over the year.

To complete our model specification, we supply prior distributions for all model parameters. Because there is roughly one event for every five or six hours and because the mean parameter is not constrained to be positive, we give $\beta_0$ a normal prior centered near the corresponding log rate. Because we allow that crime could be excitatory or inhibitory to the occurrence of future events but do not wish to “push” the model in either direction, we give $\alpha$ a normal prior centered at zero. We assume that the effect of crime on future events would likely be limited to at most a few days, so we choose a prior for $\beta$ that concentrates on correspondingly fast decay. For the daily seasonality parameters, we assume mean-zero normal priors. In order that the GP captures long-term changes in event intensity, we choose a prior for the GP decay parameter that favors slow decay. Lastly, we assume an inverse-gamma prior for the GP variance because this prior distribution is diffuse and gives a closed-form posterior conditional distribution.

We fit the model using 30,000 MCMC iterations, where we discard the first 10,000 iterations as a burn-in period and use the remaining 20,000 samples for posterior inference. We provide posterior summaries (posterior mean, standard deviation, and 90% credible interval) for $\beta_0$, $\beta_1$, $\beta_2$, $\alpha$, $\beta$, and the GP variance and decay parameters in Table 8. The posterior values of $\alpha$ are small but significantly positive, indicating that there is significant self-excitation present in the data. The relatively large values of $\beta$ suggest that the excitation is short-lived. To give more intuition into how these parameters impact the estimated intensity, we provide some visualizations.

mean sd CI_0.05 CI_0.95
-1.3361 0.0212 -1.3710 -1.3014
0.0081 0.0022 0.0050 0.0123
77.3800 14.6659 49.8852 97.8120
-0.3081 0.0300 -0.3567 -0.2589
-0.2697 0.0286 -0.3173 -0.2215
0.8771 0.7584 0.2600 2.1320
1.51e-5 1.64e-5 1.78e-6 4.52e-5
Table 8: Posterior summaries (mean, standard deviation (sd), as well as the 5th and 95th percentile) for the evolutionary LGCP model posed for the Chicago violent crime data.

In Figure 4, we provide plots that show the estimated seasonal component of our background intensity, the effect of the log-GP term, and the effect of the evolutionary component of our model following a single event. We first note that the daily seasonality captured by the model is similar to the pattern plotted in Figure 3, suggesting that our model is effectively capturing these daily patterns. In this figure, we estimate that violent crime events occur twice as often in the afternoon compared to the early morning hours. The log-GP also effectively captures the changes over the year that we noted in Figure 3.

There are two periods during the year when the intensity of crime events significantly deviates from the overall pattern. In January and February, we estimate that the rate of violent crimes is roughly 10% less than the average rate for the year. June, July, and August show violent crime rates approximately 15% above the average rate for the year. This is consistent with the expectation of increased crime rates in warmer weather. Lastly, we see that the estimated excitation lasts for only a few minutes. With such a short period, this self-excitation could be explained by paired violent events (e.g., fights that lead to multiple assault charges or multiple homicides from gang violence). This explanation is similar to that in Mohler et al. (2011), who also model crime with self-exciting point process models. Even though the effect of $\alpha$ is significant in the model, this excitation is very mild. Given our posterior mean estimate of $\alpha$, we expect a single event to incite roughly 1/100th of an additional event.

Figure 4: (Left) The estimated background intensity, excluding the log-GP term. (Center) The estimated log-GP term used to capture variation over the year, with a dotted reference line. (Right) The estimated evolutionary component of our model following a single event, illustrating the pattern of self-excitation estimated by the model. All intensities represent the expected number of events per hour.

7 Conclusion

We have explored model specification and model comparison for evolutionary point process models within a Bayesian framework. Through specification of link functions and triggering functions, we can provide flexible models that exhibit excitation as well as inhibition. Through simulation, we have shown that we can distinguish a Poisson process specification from either of these behaviors. However, we cannot distinguish link functions within a particular behavior. We have also enriched the Poisson process to a log Gaussian Cox process, and the evolutionary processes as well, again to attempt to distinguish these behaviors. To compare models, we have supplied out-of-sample prediction tools that enable investigation of short-term departure, after an event, from the conditionally independent events arising under a Poisson process. We have also provided an illustration using a crime dataset.

There is a variety of future work available here. For example, we could attempt the use of a spike and slab prior for model comparison, e.g., putting a prior on the point mass associated with the Poisson process and then learning from the posterior about the change in preference for the Poisson process. See Linderman and Adams (2014) for some discussion with application to the Hawkes process. We could also introduce marks. Such marks could arise discretely which will lead to evolutionary generalizations of multivariate Hawkes processes (see Chen et al., 2017; Farajtabar et al., 2017, for some discussion on this class of models). Alternatively, if we have continuous marks, e.g., spatial locations, then we could consider evolutionary generalizations of spatio-temporal Hawkes processes (see Reinhart, 2018, for a comprehensive review).

Appendix A Likelihood Derivation

We do this in terms of a conditional intensity, but it is trivially applicable to non-evolutionary models. For a point pattern $\mathbf{t} = \{t_1, \ldots, t_n\}$ over $(0, T]$, we write the joint density of the event times as $f(t_1, \ldots, t_n)$. To define this, we use $f(t \mid \mathcal{H}_t)$, the conditional density of the next arrival time given previous events, and $F(t \mid \mathcal{H}_t)$, the conditional CDF, for any time $t$,

$$F(t \mid \mathcal{H}_t) = \int_{t_{i-1}}^{t} f(s \mid \mathcal{H}_s)\,ds, \qquad (10)$$

where $t_{i-1}$ is the point observed previous to $t$. As we did for the intensity, we refer to $f(t \mid \mathcal{H}_t)$ and $F(t \mid \mathcal{H}_t)$ as $f^*(t)$ and $F^*(t)$, respectively.

Together, $f^*$ and $F^*$ define the hazard function (the conditional probability of an arrival given the history),

$$\lambda^*(t) = \frac{f^*(t)}{1 - F^*(t)}, \qquad (11)$$

which we show is equivalent to the intensity function. We can rewrite the hazard function as

$$\lambda^*(t) = -\frac{d}{dt}\log\{1 - F^*(t)\}.$$

Taking this equality and integrating both sides from the point $t_{i-1}$ to $t$, where $t_{i-1}$ is the point directly previous to $t$,

$$\int_{t_{i-1}}^{t} \lambda^*(s)\,ds = -\log\{1 - F^*(t)\} + \log\{1 - F^*(t_{i-1})\} = -\log\{1 - F^*(t)\}. \qquad (12)$$

By our assumption that the point pattern is simple, no points can appear exactly simultaneously. Thus, $F^*(t_{i-1}) = 0$. In words, observing an additional point given that we have already observed an event at $t_{i-1}$ has zero probability. Thus, the last equality holds. We can rearrange these to get

$$F^*(t) = 1 - \exp\left\{-\int_{t_{i-1}}^{t}\lambda^*(s)\,ds\right\}, \qquad f^*(t) = \lambda^*(t)\,\exp\left\{-\int_{t_{i-1}}^{t}\lambda^*(s)\,ds\right\}. \qquad (13)$$

These are then used to specify the joint likelihood of the observed point pattern.

We define $t_0 = 0$ and, because the observation period ends at $T \ge t_n$, we must also include the probability of not observing an event in $(t_n, T]$, namely $1 - F^*(T)$. We can now decompose the joint density of the point pattern conditionally as

$$f(t_1, \ldots, t_n) = \left[\prod_{i=1}^{n} f^*(t_i)\right]\left(1 - F^*(T)\right) = \left[\prod_{i=1}^{n} \lambda^*(t_i)\right]\exp\left\{-\int_{0}^{T}\lambda^*(s)\,ds\right\}. \qquad (14)$$

Appendix B Model Fitting

Here, we present the Markov chain Monte Carlo sampler for the evolutionary LGCP model presented in Section 6. To fit simpler specifications without the GP, one only needs to exclude updates to the GP components of the model. As discussed, our model-fitting approach resembles the conditional intensity approach in Rasmussen (2013); there is no cluster representation here because our models may include inhibition, which violates the Poisson additivity required for the cluster representation. To keep our models scalable, we have used the exponential triggering function, which is amenable to the recursive calculations (Ogata, 1978) shown below. Here, the log-likelihood of this model is

$$\ell(\boldsymbol{\theta}; \mathbf{t}) = \sum_{i=1}^{n} \log \lambda^*(t_i) - \Lambda^*(T),$$

where $\lambda^*(t_i) = g\big(\mu(t_i) + \alpha\sum_{t_j < t_i} \beta e^{-\beta(t_i - t_j)}\big)$. When the triggering function is exponential, $\sum_{t_j < t_i} e^{-\beta(t_i - t_j)} = A(i)$, where $A(1) = 0$ and $A(i) = e^{-\beta(t_i - t_{i-1})}\{1 + A(i-1)\}$ for $i > 1$. This enables us to compute the intensity at all event times at a cost that is linear in $n$.
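A sketch of this recursion (our variable names), with a check against the direct quadratic computation:

import numpy as np

def exp_trigger_sums(event_times, beta):
    # A[i] = sum_{j < i} exp(-beta * (t_i - t_j)), built up as A[i] = exp(-beta*(t_i - t_{i-1})) * (1 + A[i-1]).
    t = np.sort(np.asarray(event_times, dtype=float))
    A = np.zeros(len(t))
    for i in range(1, len(t)):
        A[i] = np.exp(-beta * (t[i] - t[i - 1])) * (1.0 + A[i - 1])
    return A

t = np.array([0.5, 0.9, 1.0, 3.2])
beta = 2.0
direct = np.array([np.sum(np.exp(-beta * (ti - t[t < ti]))) for ti in t])
print(np.allclose(exp_trigger_sums(t, beta), direct))   # True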

The compensator cannot be computed in closed form as in Rasmussen (2013) because we cannot simply bring the integral inside the nonlinear link (operator) $g$. Here, we compute the integral numerically using the trapezoidal rule,

$$\Lambda^*(T) \approx \sum_{k=1}^{K-1} \frac{\lambda^*(s_k) + \lambda^*(s_{k+1})}{2}\,(s_{k+1} - s_k),$$

where the grid points $s_1, \ldots, s_K$ are evenly spaced between $0$ and $T$. For this approximation, using the recursion quantity at the nearest preceding event time, we can compute $\lambda^*(s_k)$ at each grid point with a total computational cost that is linear in $K$. Thus, we choose $K$ to be large.

For computationally efficient GP prior distributions, we adopt the exponential covariance function. This corresponds to a continuous-time AR(1) process (see, e.g., Brockwell et al., 2007; White and Gelfand, 2019). Hence, the joint distribution of the grid random effects $\mathbf{Z} = (Z_1, \ldots, Z_m)$, with knot times $u_1 < \cdots < u_m$, can be peeled off sequentially, i.e.,

$$p(Z_1, \ldots, Z_m) = p(Z_1)\prod_{k=2}^{m} p(Z_k \mid Z_{k-1}), \qquad (15)$$

where $Z_k \mid Z_{k-1} \sim \mathrm{N}\!\left(\rho_k Z_{k-1},\ \sigma^2(1 - \rho_k^2)\right)$ with $\rho_k = \exp\{-\phi\,(u_k - u_{k-1})\}$. Thus, even though we cannot integrate out $\mathbf{Z}$, computing the prior distribution is not computationally expensive. To sample from the posterior of each random effect, we use the Metropolis algorithm with a normal random walk, each with its own candidate variance tuned for acceptance rates between 0.15 and 0.6.
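A sketch of evaluating this sequential prior (our parameterization: marginal variance sigma2 and decay phi in the covariance sigma2 * exp(-phi * |u - u'|)):

import numpy as np

def ar1_log_prior(z, knot_times, sigma2, phi):
    # log p(Z_1) + sum_k log p(Z_k | Z_{k-1}) for a zero-mean GP with exponential covariance,
    # evaluated at ordered knot_times using the Markov (continuous-time AR(1)) representation.
    z = np.asarray(z, dtype=float)
    u = np.asarray(knot_times, dtype=float)
    rho = np.exp(-phi * np.diff(u))                  # lag-one correlations
    cond_mean = rho * z[:-1]                         # E[Z_k | Z_{k-1}]
    cond_var = sigma2 * (1.0 - rho ** 2)             # Var[Z_k | Z_{k-1}]
    lp = -0.5 * (np.log(2 * np.pi * sigma2) + z[0] ** 2 / sigma2)
    lp += np.sum(-0.5 * (np.log(2 * np.pi * cond_var) + (z[1:] - cond_mean) ** 2 / cond_var))
    return lp

knots = np.linspace(0.0, 8760.0, 100)
z = 0.1 * np.random.default_rng(0).normal(size=100)
print(ar1_log_prior(z, knots, sigma2=1.0, phi=1.5e-5))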

For a proper log Gaussian Cox process model, we must evaluate the GP at each observed event time and over a grid to compute the compensator, which we do for Section 5.3. However, for Section 6, we use 100 random effects, denoted $Z_1, \ldots, Z_{100}$, spaced evenly across $(0, T]$ so that the GP does not absorb possible short-term excitation in the data. Then, we let $Z(t)$ be equal to the element of $\mathbf{Z}$ that is closest to $t$ in time.

Again, because of the nonlinear link $g$, full conditional distributions cannot be derived in closed form. Therefore, we sample from the posterior distributions of the background, triggering, and GP decay parameters using the Metropolis algorithm with a normal random walk proposal distribution, each with its own candidate variance. We use a burn-in of 10,000 iterations. For the first half of the burn-in period plus a period of 100 iterations (5,100 iterations, in this case), we sample these parameters individually and tune the candidate variances so that acceptance rates are between 0.15 and 0.6. After this time, when the Markov chain has had time to converge, we change our proposal distribution to a multivariate normal distribution whose covariance is calculated using posterior samples collected after the first half of the burn-in period (after the 5,000th iteration). The covariance of the proposal distribution is updated throughout the entire sampler to ensure its convergence (Haario et al., 1999). We found that this combination of univariate and then multivariate updates was effective in sampling from the posterior distribution of the model.

Appendix C Simulation of Evolutionary Point Processes

Because we rely heavily on simulation studies to illustrate the utility and limitations of our model comparison discussion, we provide a general approach for simulating evolutionary point patterns, taken from Rasmussen (2018) and based on the thinning method of Ogata (1981), in Algorithm 1. This algorithm allows for both excitation and inhibition.

Input:

  • conditional intensity function $\lambda^*(\cdot \mid \mathcal{H})$

  • maximum proposal (look-ahead) time $L$

  • termination time $T$

Output:

  • A realization $\{t_1, \ldots, t_n\}$ of an evolutionary point process on $(0, T]$

1: $t \leftarrow 0$; $n \leftarrow 0$; $\mathcal{H} \leftarrow \emptyset$
2: while $t < T$ do:
3:      Take $M \ge \sup_{s \in (t, t + L]} \lambda^*(s \mid \mathcal{H})$
4:      Draw $E \sim \text{Exponential}(M)$
5:      if $E > L$ then:
6:            $t \leftarrow t + L$
7:      else
8:            Set $t \leftarrow t + E$
9:            Generate $U \sim \text{Uniform}(0, M)$
10:           if $U \le \lambda^*(t \mid \mathcal{H})$ then:
11:                 $n \leftarrow n + 1$
12:                 $t_n \leftarrow t$
13:                 $\mathcal{H} \leftarrow \mathcal{H} \cup \{t_n\}$
14:           end if
15:      end if
16: end while
17: if $t_n \le T$ then:
18:      return $\{t_1, \ldots, t_n\}$
19: else:
20:      return $\{t_1, \ldots, t_{n-1}\}$
21: end if
Algorithm 1 Ogata’s Thinning Method for Generating an Evolutionary Point Process (Ogata, 1981)
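A Python sketch of Algorithm 1 (our function names; the upper bound M must be supplied by the user, since a general conditional intensity has no automatic bound; for a decaying excitatory kernel, the intensity immediately after time t bounds it over the look-ahead window, which is what the toy example below uses):

import numpy as np

def simulate_by_thinning(cond_intensity, upper_bound, T, L, seed=None):
    # Ogata-style thinning for an evolutionary point process on (0, T].
    # cond_intensity(t, history): lambda*(t) given the accepted points so far.
    # upper_bound(t, history, L): M >= sup of lambda* over (t, t + L] given that history.
    rng = np.random.default_rng(seed)
    t, history = 0.0, []
    while t < T:
        M = upper_bound(t, history, L)
        E = rng.exponential(1.0 / M)
        if E > L:
            t += L                                   # no candidate in this look-ahead window
        else:
            t += E
            if rng.uniform(0.0, M) <= cond_intensity(t, history):
                history.append(t)                    # accept the candidate point
    return [s for s in history if s <= T]            # drop a final point beyond T, if any

# Toy run: a Hawkes-type intensity with exponential triggering and the identity link.
mu, alpha, beta = 0.5, 0.8, 2.0
lam = lambda t, h: mu + sum(alpha * beta * np.exp(-beta * (t - s)) for s in h if s < t)
bound = lambda t, h, L: mu + sum(alpha * beta * np.exp(-beta * (t - s)) for s in h if s <= t)
print(simulate_by_thinning(lam, bound, T=20.0, L=1.0, seed=1))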

References

  • C. Blundell, J. Beck, and K. A. Heller (2012) Modelling reciprocating relationships with Hawkes processes. In Advances in Neural Information Processing Systems, pp. 2600–2608. Cited by: §1.
  • A. Bray and F. P. Schoenberg (2013) Assessment of point process models for earthquake forecasting. Statistical Science, pp. 510–520. Cited by: §1.
  • P. Brémaud and L. Massoulié (1996) Stability of nonlinear Hawkes processes. The Annals of Probability, pp. 1563–1588. Cited by: §2, §3.
  • P. J. Brockwell, R. Davis, and Y. Yang (2007) Continuous-time Gaussian autoregression. Statistica Sinica 17, pp. 63–80. Cited by: Appendix B.
  • F. Chen and W. H. Tan (2018) Marked self-exciting point process modelling of information diffusion on Twitter. The Annals of Applied Statistics 12 (4), pp. 2175–2196. Cited by: §1, §1, §4.
  • S. Chen, A. Shojaie, E. Shea-Brown, and D. Witten (2017) The multivariate Hawkes process in high dimensions: beyond mutual excitation. arXiv preprint arXiv:1707.04928. Cited by: §1, §7.
  • Y. Chen, Q. Xin, V. Ventura, and R. E. Kass (2019) Stability of point process spiking neuron models. Journal of Computational Neuroscience 46 (1), pp. 19–32. Cited by: §3.
  • E. Chornoboy, L. Schramm, and A. Karr (1988) Maximum likelihood identification of neural point process systems. Biological Cybernetics 59 (4), pp. 265–275. Cited by: §1.
  • S. Dachian and Y. A. Kutoyants (2008) On the goodness-of-fit tests for some continuous time processes. In Statistical Models and Methods for Biomedical and Technical Systems, pp. 385–403. Cited by: §1.
  • D. J. Daley and D. Vere-Jones (2003) An introduction to the theory of point processes, vol. 1. Springer, New York. Cited by: §2.
  • P. Embrechts, T. Liniger, and L. Lin (2011) Multivariate Hawkes processes: an application to financial data. Journal of Applied Probability 48 (A), pp. 367–378. Cited by: §1.
  • M. Farajtabar, Y. Wang, M. G. Rodriguez, S. Li, H. Zha, and L. Song (2017) Coevolve: a joint point process model for information diffusion and network co-evolution. Journal of Machine Learning Research, pp. 1–49. Cited by: §1, §7.
  • E. W. Fox, F. P. Schoenberg, and J. S. Gordon (2016) Spatially inhomogeneous background rate estimators and uncertainty quantification for nonparametric Hawkes point process models of earthquake occurrences. The Annals of Applied Statistics 10 (3), pp. 1725–1756. Cited by: §1.
  • F. Gerhard, M. Deger, and W. Truccolo (2017) On the stability and dynamics of stochastic spiking neuron models: nonlinear Hawkes process and point process GLMs. PLoS Computational Biology 13 (2), pp. e1005390. Cited by: §3.
  • T. Gneiting and A. E. Raftery (2007) Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association 102 (477), pp. 359–378. Cited by: §4.
  • H. Haario, E. Saksman, and J. Tamminen (1999) Adaptive proposal distribution for random walk Metropolis algorithm. Computational Statistics 14 (3), pp. 375–396. Cited by: Appendix B, §3.1.
  • N. R. Hansen, P. Reynaud-Bouret, and V. Rivoirard (2015) Lasso and probabilistic inequalities for multivariate point processes. Bernoulli 21 (1), pp. 83–143. Cited by: §1.
  • A. G. Hawkes and D. Oakes (1974) A cluster process representation of a self-exciting process. Journal of Applied Probability 11 (03), pp. 493–503. Cited by: §3.1.
  • A. G. Hawkes (1971a) Point spectra of some mutually exciting point processes. Journal of the Royal Statistical Society. Series B (Statistical Methodology), pp. 438–443. Cited by: §1.
  • A. G. Hawkes (1971b) Spectra of some self-exciting and mutually exciting point processes. Biometrika, pp. 83–90. Cited by: §1, §2.
  • V. Isham and M. Westcott (1979) A self-correcting point process. Stochastic Processes and Their Applications 8 (3), pp. 335–347. Cited by: §2.
  • A. Jordan, F. Krüger, and S. Lerch (2017) Evaluating probabilistic forecasts with the R package scoringRules. arXiv preprint arXiv:1709.04743. Cited by: §4.
  • F. Krüger, S. Lerch, T. L. Thorarinsdottir, and T. Gneiting (2016) Probabilistic forecasting and comparative model assessment based on Markov chain Monte Carlo output. ArXiv preprint arXiv:1608.06802. Cited by: §4.
  • S. Linderman and R. Adams (2014) Discovering latent network structure in point process data. In International Conference on Machine Learning, pp. 1413–1421. Cited by: §1, §3.1, §7.
  • H. Mei and J. M. Eisner (2017) The neural Hawkes process: a neurally self-modulating multivariate point process. In Advances in Neural Information Processing Systems, pp. 6754–6764. Cited by: §1, §3.1.
  • P. Meyer (1971) Démonstration simplifiée d’un théorème de Knight. Séminaire de probabilités de Strasbourg 5, pp. 191–195. Cited by: §1.
  • G. O. Mohler, M. B. Short, P. J. Brantingham, F. P. Schoenberg, and G. E. Tita (2011) Self-exciting point process modeling of crime. Journal of the American Statistical Association 106 (493), pp. 100–108. Cited by: §1, §6.
  • Y. Ogata (1978) The asymptotic behaviour of maximum likelihood estimators for stationary point processes. Annals of the Institute of Statistical Mathematics 30 (1), pp. 243–261. Cited by: Appendix B, §3.1.
  • Y. Ogata (1981) On Lewis’ simulation method for point processes. IEEE Transactions on Information Theory 27 (1), pp. 23–31. Cited by: Appendix C, Algorithm 1.
  • Y. Ogata (1988) Statistical models for earthquake occurrences and residual analysis for point processes. Journal of the American Statistical Association 83 (401), pp. 9–27. Cited by: §1, §1, §4.
  • Y. Ogata (1998) Space-time point-process models for earthquake occurrences. Annals of the Institute of Statistical Mathematics 50 (2), pp. 379–402. Cited by: §1.
  • J. W. Pillow, J. Shlens, L. Paninski, A. Sher, A. M. Litke, E. Chichilnisky, and E. P. Simoncelli (2008) Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature 454 (7207), pp. 995. Cited by: §1.
  • J. G. Rasmussen (2013) Bayesian inference for Hawkes processes. Methodology and Computing in Applied Probability 15 (3), pp. 623–642. Cited by: Appendix B, Appendix B, §2, §3.1.
  • J. G. Rasmussen (2018) Lecture notes: temporal point processes and the conditional intensity function. arXiv preprint arXiv:1806.00221. Cited by: Appendix C, §1.
  • A. Reinhart (2018) A review of self-exciting spatio-temporal point processes and their applications. Statistical Science 33 (3), pp. 299–318. Cited by: §7.
  • S. Shirota and A. E. Gelfand (2017) Space and circular time log Gaussian Cox processes with application to crime event data. The Annals of Applied Statistics 11 (2), pp. 481–503. Cited by: §6.
  • A. Simma and M. I. Jordan (2012) Modeling events with cascades of Poisson processes. arXiv preprint arXiv:1203.3516. Cited by: §2.
  • J. Tobin (1958) Estimation of relationships for limited dependent variables. Econometrica: Journal of the Econometric Society, pp. 24–36. Cited by: §1.
  • P. A. White and A. E. Gelfand (2019) Multivariate functional data modeling with time-varying clustering. arXiv preprint arXiv:1904.11518. Cited by: Appendix B.
  • P. White and E. Porcu (2019) Nonseparable covariance models on circles cross time: a study of Mexico City ozone. Environmetrics 30 (5), pp. e2558. Cited by: §6.
  • L. Zhu (2013) Central limit theorem for nonlinear Hawkes processes. Journal of Applied Probability 50 (3), pp. 760–771. Cited by: §2.