Modeling event cascades using networks of additive count sequences

07/23/2018 ∙ by Shinsuke Koyama, et al.

We propose a statistical model for networks of event count sequences built on a cascade structure. We assume that each event triggers successor events, whose counts follow additive probability distributions; the ensemble of counts is given by their superposition. These assumptions allow the marginal distribution of the count sequences and the conditional distribution of the event cascades to have analytic forms. We present our model framework using Poisson and negative binomial distributions as the building blocks. Based on this, we describe a statistical method for estimating the model parameters and the event cascades from the observed count sequences.


1 Introduction

This study concerns modeling and inference of event cascades, which ensue when events cause other events to occur, thus triggering further events. Example events in this context include chemical reactions [22], neuronal firing [2], earthquakes [16], sending an email [4], posting and sharing content on social networking services [11, 26], and urban crime [14]. Because event cascades are universal in a wide variety of systems, their comprehension is essential for understanding the emergence of complex phenomena [1, 21].

Self-exciting and mutually exciting point processes (i.e., Hawkes processes) are widely used for modeling and analyzing event sequences [7, 6]. The rate at which events occur in these models is partitioned into two components: a background rate describing exogenous effects (e.g., trends), and a mutually exciting component through which events trigger an increase in the process rate. Owing to the latter component, Hawkes processes exhibit rich dynamic behavior in terms of event cascades [18, 19]. Social data mining has also received much attention [10, 25], and modeling and inference of social networks built upon Hawkes processes form an active research area [13, 20, 24, 27].

Whereas Hawkes processes describe a series of events in continuous time, real data are often aggregated over consecutive periods (e.g., days or weeks) within which the timing of each event is lost, resulting in sequences of event count data in discrete time (Figure 1). Hawkes processes could be applied to such data by neglecting the precise timing of events within each period (e.g., [12]), but it is more desirable to use a statistical model that directly accounts for count data.

In this paper, we propose a statistical model for count sequence networks that possesses a cascade structure. The key assumptions made in this model are that (i) the event counts triggered by preceding events follow additive probability distributions, and (ii) the ensemble of observed events is given by their superposition. These assumptions allow the marginal distribution of count sequences and the conditional probability distribution of the event cascades, given the count sequences, to take analytic forms. We illustrate our modeling framework using Poisson and negative binomial distributions, which cover a broad range of variability in event counts. Based on the proposed model, we develop a statistical method to estimate the event cascades and model parameters. The proposed method is then applied to simulated event data.

Figure 1: “Coarse-graining” of event sequences. Event times in continuous time are aggregated within consecutive periods of equal length, resulting in a sequence of event counts in discrete time.

2 Statistical model

2.1 Model construction

We consider a multivariate time series of event counts with N components, x_{i,t}, for i = 1, …, N and t = 1, 2, …, T (t = 1 being the initial time), where x_{i,t} represents the count of events of the i-th component at time t. We suppose that the events at each (i, t) are partitioned into two groups: events occurring because of the background rate and those triggered by preceding events. We make the following assumptions for these two groups of events.

  • Let z^{0}_{i,t} be the event count at (i, t) occurring due to the background rate, and let f_0 be its distribution. The expected value of z^{0}_{i,t} is given by E[z^{0}_{i,t}] = ν_i.

  • Let z^{j,s}_{i,t} denote the count of events at (i, t) triggered by the preceding events at (j, s), s < t, and let f be its distribution, conditioned on x_{j,s}. The expectation of z^{j,s}_{i,t} is given by

    E[z^{j,s}_{i,t} | x_{j,s}] = a_{ij} φ(t − s) x_{j,s},    (1)

    where a_{ij} ≥ 0 is the strength of the influence from the j-th to the i-th component, and φ is the kernel function satisfying φ(u) = 0 for u ≤ 0 (i.e., causality) and Σ_{u=1}^{∞} φ(u) = 1.

Because the total number of events at (i, t) is given by the sum of background and triggered events, the following equality holds:

x_{i,t} = z^{0}_{i,t} + Σ_{j=1}^{N} Σ_{s<t} z^{j,s}_{i,t}.    (2)
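As a concrete illustration, the generative process of Eqs. (1)–(2) can be sketched in a few lines of Python. The sketch below assumes Poisson building blocks, a truncated geometric kernel, and illustrative parameter values; all variable names are hypothetical.

```python
import numpy as np

def simulate_cascade(A, nu, phi, T, rng):
    """Simulate total counts x_{i,t}: background events with rate nu_i
    plus events triggered by past counts through A and the kernel phi
    (Poisson building blocks, cf. Eqs. (1)-(2))."""
    N = len(nu)
    H = len(phi)                      # kernel truncated to H lags
    x = np.zeros((T, N), dtype=int)
    for t in range(T):
        lam = nu.astype(float).copy()  # background rate nu_i
        for u in range(1, min(t, H) + 1):
            # triggered rate: sum_j a_ij * phi(u) * x_{j,t-u}
            lam += phi[u - 1] * (A @ x[t - u])
        x[t] = rng.poisson(lam)
    return x

rng = np.random.default_rng(0)
A = np.array([[0.3, 0.1], [0.0, 0.4]])   # influence matrix (spectral radius < 1)
nu = np.array([1.0, 0.5])                # background rates
phi = 0.5 ** np.arange(1, 11)
phi = phi / phi.sum()                    # normalized decaying kernel
x = simulate_cascade(A, nu, phi, 200, rng)
```

Sampling the total count from a single Poisson draw with the summed rate is equivalent to summing independent background and triggered Poisson counts, by the additivity discussed in Section 2.3.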

Throughout this paper, we use the following notation: x_t = (x_{1,t}, …, x_{N,t}) denotes the counts at time t, z_t = {z^{0}_{i,t}, z^{j,s}_{i,t}} the latent counts at time t, and x_{1:t} = {x_1, …, x_t} (similarly z_{1:t}) the corresponding histories.

We also make the following assumption:

  • Given the preceding events x_{1:t−1}, the event counts at time t, z^{0}_{i,t} and z^{j,s}_{i,t}, are statistically independent for i = 1, …, N, j = 1, …, N, and s < t.

From assumptions (I)–(III), the probability distribution of z_t, conditioned on the preceding events, is

p(z_t | x_{1:t−1}) = Π_{i=1}^{N} [ f_0(z^{0}_{i,t}) Π_{j=1}^{N} Π_{s<t} f(z^{j,s}_{i,t} | x_{j,s}) ].    (3)

Given an initial probability distribution of x_1 at time t = 1,

p_1(x_1),    (4)

the joint probability distribution of the complete data is obtained as

p(x_{1:T}, z_{2:T}) = p_1(x_1) Π_{t=2}^{T} p(z_t | x_{1:t−1}).    (5)

When only the event counts x_{1:T} are observed, we require the probability distribution of x_{1:T}, which is obtained from Eq. (5) as follows. Let Ω(x_t) be the set of latent counts z_t satisfying Eq. (2),

Ω(x_t) = { z_t : x_{i,t} = z^{0}_{i,t} + Σ_{j=1}^{N} Σ_{s<t} z^{j,s}_{i,t}, i = 1, …, N },    (6)

and define

p(x_t | x_{1:t−1}) = Σ_{z_t ∈ Ω(x_t)} p(z_t | x_{1:t−1}).    (7)

Then, the probability distribution of x_{1:T} is obtained by marginalizing Eq. (5) over z_{2:T},

p(x_{1:T}) = p_1(x_1) Π_{t=2}^{T} Σ_{z_t ∈ Ω(x_t)} p(z_t | x_{1:t−1}).    (8)

The summation on the right-hand side of Eq. (8) is generally difficult to calculate. However, it can be carried out analytically if the underlying probability distributions are additive.

Definition 1

A family of probability distributions {f(x; λ)}, indexed by λ > 0, is called additive if the distribution of the sample sum for a random sample of size n from f(x; λ) belongs to the family itself with the parameter nλ.

Theorem 2

If z^{0}_{i,t} and z^{j,s}_{i,t} follow additive probability distributions, f(z^{0}_{i,t}; ν_i) and f(z^{j,s}_{i,t}; a_{ij} φ(t − s) x_{j,s}), respectively, then p(x_t | x_{1:t−1}) becomes

p(x_t | x_{1:t−1}) = Π_{i=1}^{N} f(x_{i,t}; λ_{i,t}),    (9)

where

λ_{i,t} = ν_i + Σ_{j=1}^{N} a_{ij} Σ_{s<t} φ(t − s) x_{j,s}.    (10)

A proof of this theorem is given in Appendix A.

Given x_{1:T}, the conditional probability distribution of the latent counts can also be factorized as follows using the additive probability distributions:

p(z_{2:T} | x_{1:T}) = Π_{t=2}^{T} Π_{i=1}^{N} p(z^{0}_{i,t}, {z^{j,s}_{i,t}} | x_{i,t}).    (11)

2.2 Stability condition

The stability condition for our model is derived as follows. Assume that the process started a long time before (t → −∞), and let μ_{i,t} = E[λ_{i,t}] = E[x_{i,t}] denote the expectation of the rate. Taking the expectation of Eq. (10) leads to

μ_{i,t} = ν_i + Σ_{j=1}^{N} a_{ij} Σ_{s<t} φ(t − s) μ_{j,s}.    (12)

The Z-transform of Eq. (12) is

μ̃_i(z) = ν̃_i(z) + φ̃(z) Σ_{j=1}^{N} a_{ij} μ̃_j(z).    (13)

Eq. (13) can be rewritten in vector form as μ̃(z) = ν̃(z) + φ̃(z) A μ̃(z), where A = (a_{ij}) is the influence matrix, from which the Z-transform of the expected rate is obtained:

μ̃(z) = (I − φ̃(z) A)^{−1} ν̃(z).    (14)

Thus, the spectral radius of A, defined as the maximum of the absolute values of the eigenvalues of A, must be smaller than unity in order for the expected rate to be finite.

Indeed, under this condition, Eq. (14) is expressed as

μ̃(z) = Σ_{n=0}^{∞} (φ̃(z) A)^{n} ν̃(z).    (15)

The expected rate is obtained using the inverse Z-transform,

μ_t = Σ_{n=0}^{∞} A^{n} (φ^{∗n} ∗ ν)_t,    (16)

where ‘∗’ represents convolution and φ^{∗n} denotes the n-fold convolution of the kernel (φ^{∗0} ∗ ν = ν).
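The stability condition can be checked numerically for any given influence matrix; the matrices below are illustrative examples, not values from this paper.

```python
import numpy as np

def spectral_radius(A):
    """Largest absolute eigenvalue of the influence matrix."""
    return max(abs(np.linalg.eigvals(A)))

# Illustrative influence matrices: the first satisfies the stability
# condition (spectral radius < 1), the second does not.
A_stable = np.array([[0.3, 0.1], [0.2, 0.4]])
A_unstable = np.array([[0.9, 0.5], [0.5, 0.9]])

print(spectral_radius(A_stable))    # < 1: expected rate stays finite
print(spectral_radius(A_unstable))  # > 1: explosive regime
```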

2.3 Additive probability distributions

We provide two additive probability distributions that can be used in our modeling framework.

2.3.1 Poisson distribution

It is well known that the Poisson distribution has an additive property. The probability distribution function of a Poisson distribution is

f(x; λ) = λ^x e^{−λ} / x!,  x = 0, 1, 2, ….    (17)

The mean and variance are both given by λ; the equality of the mean and variance is an important property of the Poisson distribution. The conditional probability distribution (11) is derived as a product of multinomial distributions, i.e.,

p(z_{i,t} | x_{i,t}) = Mult(z_{i,t}; x_{i,t}, π_{i,t}),    (18)

where π_{i,t} collects the probabilities π^{0}_{i,t} = ν_i / λ_{i,t} and π^{j,s}_{i,t} = a_{ij} φ(t − s) x_{j,s} / λ_{i,t}. The conditional mean and variance of each element z of z_{i,t}, with corresponding probability π, are, respectively, given by

E[z | x_{i,t}] = x_{i,t} π    (19)

and

Var[z | x_{i,t}] = x_{i,t} π (1 − π).    (20)

Inherent in the Poisson distribution is the requirement that events are independent of one another. Thus, the Poisson distribution would be adequate for modeling coarse-grained data when successive events are independent of each other.
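As a quick numerical check of the multinomial moment formulas (19)–(20), one can compare them with seeded multinomial samples; the rate components below are illustrative.

```python
import numpy as np

# Conditioned on the total count x, Poisson components split multinomially;
# compare the moment formulas (19)-(20) with seeded multinomial samples.
rng = np.random.default_rng(1)
lam = np.array([2.0, 1.0, 0.5])   # illustrative rate components
pi = lam / lam.sum()              # multinomial probabilities
x = 10                            # observed total count

samples = rng.multinomial(x, pi, size=200_000)
emp_mean = samples.mean(axis=0)   # should approach x * pi
emp_var = samples.var(axis=0)     # should approach x * pi * (1 - pi)
```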

2.3.2 Negative binomial distribution

We consider the negative binomial (NB) distribution in the following form:

f(x; λ, ρ) = [Γ(x + λ/ρ) / (Γ(λ/ρ) x!)] (1/(1 + ρ))^{λ/ρ} (ρ/(1 + ρ))^{x},    (21)

where Γ is the Gamma function. The properties of the NB distribution are summarized in Appendix B. The NB distribution is additive, with mean and variance given by λ and (1 + ρ)λ, respectively. Note that the variance is greater than the mean, and the extra variability is controlled by the dispersion parameter ρ > 0. The NB distribution converges to a Poisson distribution as ρ → 0. The conditional probability distribution (11) using the NB distribution is derived as a product of Dirichlet-multinomial (DM) distributions as follows:

p(z_{i,t} | x_{i,t}) = DM(z_{i,t}; x_{i,t}, α_{i,t}),    (22)

where the concentration parameters α_{i,t} are given by ν_i/ρ and a_{ij} φ(t − s) x_{j,s}/ρ. The conditional mean and variance of each element z of z_{i,t}, with corresponding probability π, are, respectively, given by

E[z | x_{i,t}] = x_{i,t} π    (23)

and

Var[z | x_{i,t}] = x_{i,t} π (1 − π) c_{i,t},    (24)

where

c_{i,t} = (x_{i,t} + λ_{i,t}/ρ) / (1 + λ_{i,t}/ρ).    (25)

Compared with Eqs. (19) and (20), we see that the variance of the DM distribution is greater than that of the multinomial distribution, since c_{i,t} > 1 for x_{i,t} > 1. The DM distribution (22) converges to the multinomial distribution (18) as ρ → 0. The properties of the DM distribution are summarized in Appendix C.

The Poisson assumption is violated if events positively correlate with each other, resulting in over-dispersion characterized by the count variance being greater than the mean. Therefore, the NB distribution may be appropriate when successive events are positively correlated.

Note that the NB distribution with a value of ρ close to zero is statistically indistinguishable from the Poisson distribution. In this sense, the Poisson distribution is the limiting case of the NB distribution for ρ → 0.
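The NB moments and the Poisson limit can be checked directly from the pmf. The sketch below assumes a parametrization with mean λ and variance (1 + ρ)λ, written via log-Gamma functions for numerical stability; the values are illustrative.

```python
import numpy as np
from math import lgamma, exp, log

def nb_pmf(x, lam, rho):
    """NB pmf with mean lam and variance (1 + rho) * lam (assumed
    parametrization); computed via log-Gamma for stability."""
    r = lam / rho                  # shape parameter
    p = rho / (1.0 + rho)          # "success" probability
    return exp(lgamma(x + r) - lgamma(r) - lgamma(x + 1)
               + r * log(1 - p) + x * log(p))

lam, rho = 3.0, 0.5
xs = np.arange(0, 400)             # truncation; remaining tail mass is negligible
pmf = np.array([nb_pmf(k, lam, rho) for k in xs])
mean = (xs * pmf).sum()
var = (xs ** 2 * pmf).sum() - mean ** 2
```

As ρ → 0, the same pmf approaches the Poisson pmf with mean λ.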

2.4 Inference of event cascades

We consider a situation in which only the event counts x_{1:T} are given as data, and the cascade counts z_{2:T} are treated as latent variables. Thus, we wish to estimate z_{2:T} from x_{1:T}. For simplicity, we assume that the background rate is constant in time. We express the probability distributions as f(x_{i,t}; λ_{i,t}(θ)), where θ = {ν_i, a_{ij}, τ} is the set of parameters in λ_{i,t} (Eq. (10)), with τ being the time constant of the kernel φ. The estimation method consists of two steps: (i) estimate the parameters θ and, in the NB case, the dispersion ρ, and (ii) estimate the latent variables z_{2:T} using the estimated parameters.

The parameters are estimated from the data based on the conventional maximum likelihood (ML) principle. The log-likelihood function of the parameters is expressed using the additive probability distribution as follows:

L(θ) = Σ_{t=2}^{T} Σ_{i=1}^{N} log f(x_{i,t}; λ_{i,t}(θ)),    (26)

and its derivatives are

∂L/∂θ = Σ_{t=2}^{T} Σ_{i=1}^{N} [∂ log f(x_{i,t}; λ_{i,t}) / ∂λ_{i,t}] ∂λ_{i,t}/∂θ,    (27)

where f is given by the NB distribution (21) in the over-dispersed case and by the Poisson distribution (17) otherwise. The optimal parameters are determined by maximizing the log-likelihood function under the constraint that ν_i and a_{ij} are non-negative. This optimization can be performed using standard numerical techniques [23].

With the estimated parameters, the latent variables are estimated based on the conditional probability distribution, Eq. (11). The conditional expectation provides the estimate with minimum mean squared error [3].
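A minimal sketch of step (i) for a single-node Poisson version of the model, with the kernel held fixed rather than estimated; all parameter values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Single-node Poisson version: lambda_t = nu + a * sum_u phi(u) * x_{t-u},
# with a fixed, truncated geometric kernel phi.
rng = np.random.default_rng(2)
beta, H = 0.5, 10
phi = (1 - beta) * beta ** np.arange(H)   # phi(u) = (1 - beta) * beta^(u-1)
nu_true, a_true = 0.5, 0.5

T = 3000
x = np.zeros(T)
for t in range(T):
    h_t = sum(phi[u - 1] * x[t - u] for u in range(1, min(t, H) + 1))
    x[t] = rng.poisson(nu_true + a_true * h_t)

# Precompute the triggering term so the likelihood is cheap to evaluate.
h = np.array([sum(phi[u - 1] * x[t - u] for u in range(1, min(t, H) + 1))
              for t in range(T)])

def nll(theta):
    nu, a = theta
    lam = nu + a * h
    return -(x * np.log(lam) - lam).sum()   # Poisson log-lik. up to a constant

res = minimize(nll, x0=[1.0, 0.1], method="L-BFGS-B",
               bounds=[(1e-6, None), (0.0, 0.999)])
nu_hat, a_hat = res.x
```

In practice, the gradient (27) can be supplied to the optimizer; the sketch relies on numerical differentiation for brevity.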

3 Simulation study

We applied our method to synthetic data in order to examine the extent to which it can extract event cascades. We generated data from the probability distribution (5) using the NB distribution. We used an exponential function for the kernel with a fixed time constant. The background rates were set to a common value for all components. The elements of the influence matrix were generated from a gamma distribution whose mean and shape parameters were 0.05 and 0.4, respectively (Figure 2a).

Simulations were performed via the following steps. First, the model was simulated over a time interval of length T in order to generate samples of the counts and cascades (Figure 2b). We then estimated the parameters using the ML method, and the latent variables were estimated from the counts (Figure 2c,d). We repeated these steps while varying the simulation interval T and the dispersion parameter ρ.

Figure 2: (a) Influence matrix used for simulation and (b) sample of event counts (circles) and cascades (arrows). The diameter of each circle and the width of each arrow are proportional to the event counts. (c) Estimated matrix and (d) estimated cascades, computed from the data in (b).

The estimation performance was quantified using the mean-square error (MSE) between the true and estimated parameters. To compute the MSE for a parameter θ, we performed the simulation with K repetitions for each set of parameter values. Denoting the estimate in the k-th repetition by θ̂_k, the MSE was computed as follows:

MSE = (1/K) Σ_{k=1}^{K} ||θ̂_k − θ||²
    = ||θ̄ − θ||² + (1/K) Σ_{k=1}^{K} ||θ̂_k − θ̄||²,  θ̄ = (1/K) Σ_{k=1}^{K} θ̂_k,    (28)

where ||·|| represents the Euclidean norm (i.e., the Frobenius norm for the influence matrix). The first and second terms on the second line of Eq. (28) are the (squared) bias and the variance, respectively.
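The bias-variance decomposition in Eq. (28) can be verified numerically; the "estimates" below are arbitrary illustrative numbers.

```python
import numpy as np

# Check of Eq. (28): the MSE over repetitions equals squared bias plus
# variance when the bias is measured from the sample mean of the estimates.
theta = np.array([0.5, 1.0])                       # true parameter
theta_hat = np.array([[0.60, 0.90], [0.40, 1.20],
                      [0.55, 1.05], [0.45, 0.95]]) # K = 4 estimates

mse = np.mean(np.sum((theta_hat - theta) ** 2, axis=1))
theta_bar = theta_hat.mean(axis=0)
bias2 = np.sum((theta_bar - theta) ** 2)
variance = np.mean(np.sum((theta_hat - theta_bar) ** 2, axis=1))
```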

The results are summarized in Figure 3. We see that the bias and variance decrease as T increases. For two of the parameters the bias is an order of magnitude smaller than the variance, and for another the two are comparable. However, the bias in the kernel time constant is greater than the variance; this indicates that the estimate of the time constant is relatively less accurate.

Once we determine the optimal parameter values, we can compute the conditional expectation of the latent variables, from which detailed statistical characteristics of the event cascades can be extracted. For example, the total number of triggered events is estimated as

n̂_trig = Σ_{i,t} Σ_{j, s<t} E[z^{j,s}_{i,t} | x_{1:T}].    (29)

Figure 4 shows a scatter plot of n̂_trig against the true value, from which we confirm that the number of triggered events is estimated reasonably well.

We may define the “size of event cascades” as

c_{j,s} = Σ_{i=1}^{N} Σ_{t>s} E[z^{j,s}_{i,t} | x_{1:T}],    (30)

which represents the expected number of events triggered by the events at (j, s). Figure 5 (top panels) shows the empirical cumulative distribution function of the estimated size alongside that of the true size. These two empirical distributions can be compared visually using a quantile-quantile (Q-Q) plot, constructed by plotting the quantiles of the estimated size against those of the true size (Figure 5, bottom panels). We see that the points approximately lie on a line, confirming that the two distributions agree overall. The disagreement in the higher quantiles indicates that the frequency of large event cascades tends to be underestimated.

Figure 3: Bias (top) and variance (bottom) of the parameter estimates as a function of the simulation interval T. Circles, crosses, and upward triangles represent results for three values of the dispersion parameter. The results in this figure were computed by averaging over 100 simulations. The bias and variance decrease as T increases.
Figure 4: Scatter plot of the estimated total number of triggered events against the true value for three values of the dispersion parameter (left, center, right). Points approximately lie along the diagonal, indicating that the total number of triggered events is estimated well.
Figure 5: Top: Empirical cumulative distribution function of the true size of event cascades (dotted line) and of the estimated size (solid line) for three values of the dispersion parameter (left, middle, right). Bottom: Corresponding Q-Q plots of the two distributions. Points approximately lie along the diagonal, indicating that the two distributions agree.

4 Discussion

In this study, we proposed a statistical model for networks of event count sequences built on a cascade structure. The key to our modeling framework is the use of additive probability distributions as the building blocks. Their convolution property allows the marginal distribution of the count sequences and the conditional distribution of the event cascades to take analytic forms. We presented our method with two additive probability distributions: the Poisson and NB distributions. Using these two distributions, the conditional distributions of the event cascades are found to be multinomial and Dirichlet-multinomial distributions, respectively.

The data (i.e., measurements) we considered form count sequences in discrete time. Such measurements may be obtained via “coarse-graining” of the underlying event sequences in continuous time (Figure 1). Our model becomes a Hawkes process in the continuous-time limit; thus, the Hawkes model works as well as our model when the number of events falling in each time window is only 0 or 1. Our model would be preferred to the Hawkes model when the time resolution is not fine enough to resolve individual events.

In general, our model is expected to be valid for real data when the time resolution of the data series is finer than the time scale of the event cascades inherent in the underlying point processes. When this is not the case, events occurring within the same time window may no longer be independent across different nodes of the network, violating model assumption (III).

We demonstrated our method with simulated data, in which all entries of the influence matrix were independent random variables drawn from the same distribution. In practical situations, however, the influence matrix may be structured; e.g., the diagonal entries might be larger than the off-diagonal entries, reflecting the fact that a given event could more easily trigger a later event of the same type. Based on the optimality of the maximum likelihood principle, we expect that our method would work just as well for estimating such a structured matrix if enough data are available.

Further development may be required to apply this technique to real data. First, for large-scale networks, it is necessary to develop statistical methods that estimate the influence matrix from a limited amount of data, because the conventional ML method may fail [5]. Second, we assumed that the background rate is constant in time, but this assumption may not be valid when nonstationary effects (e.g., seasonality and trends) are not negligible [17]. Inference of a time-dependent background rate will pose a challenging problem.

Acknowledgments

This study was supported in part by MEXT as Exploratory Challenges on Post-K computer (Studies of Multi-level Spatiotemporal Simulation of Socioeconomic Phenomena), and Grant-in-Aid for Scientific Research (KAKENHI) by JSPS Grant Number 17H02041.

Appendix A Proof of Theorem 2

Notice that p(z_t | x_{1:t−1}) depends on x_{1:t−1} only through the parameters of the component distributions. Thus, it can be expressed as a product of additive distributions evaluated at the latent counts. Using Eqs. (6) and (7), Eq. (8) is calculated as follows:

p(x_{1:T}) = p_1(x_1) Π_{t=2}^{T} Σ_{z_t ∈ Ω(x_t)} Π_{i=1}^{N} [ f(z^{0}_{i,t}; ν_i) Π_{j=1}^{N} Π_{s<t} f(z^{j,s}_{i,t}; a_{ij} φ(t − s) x_{j,s}) ].    (31)

Using the additive probability distributions and their convolution property leads to

Σ f(z^{0}_{i,t}; ν_i) Π_{j=1}^{N} Π_{s<t} f(z^{j,s}_{i,t}; a_{ij} φ(t − s) x_{j,s}) = f(x_{i,t}; λ_{i,t}),    (32)

where the summation is over the latent counts consistent with Eq. (2), and x_{i,t} and λ_{i,t} are given by Eqs. (2) and (10), respectively. Substituting Eq. (32) into Eq. (31) yields Eq. (9).

Appendix B Negative binomial distribution

Here, we summarize the properties of the NB distribution used in this paper. See [8, 9] for a comprehensive review. The probability distribution function of an NB distribution is usually expressed in the following form:

f(x; r, p) = [Γ(x + r) / (Γ(r) x!)] p^{x} (1 − p)^{r},    (33)

which is conventionally interpreted as the probability of the number of successes before r failures occur in a series of independent Bernoulli trials with success probability p. Note that r is taken as a real number greater than 0, despite this interpretation. The NB distribution is also derived from a Poisson-gamma mixture distribution. The cumulant generating function (CGF) of Eq. (33) is given by

K(u) = r log[(1 − p) / (1 − p e^{u})],    (34)

from which the mean and variance are rp/(1 − p) and rp/(1 − p)², respectively. By changing the parameters from (r, p) to (λ, ρ) with

λ = rp/(1 − p),  ρ = p/(1 − p),    (35)

we obtain Eq. (21). Accordingly, the CGF is expressed as

K(u) = −(λ/ρ) log[1 + ρ(1 − e^{u})].    (36)

The additivity of the NB distribution is easily confirmed using Eq. (36) as follows: suppose that x_1, …, x_n are independent and identically distributed with f(x; λ, ρ). The resulting CGF of the sum x_1 + ⋯ + x_n is given by

n K(u) = −(nλ/ρ) log[1 + ρ(1 − e^{u})],    (37)

which is the CGF of f(x; nλ, ρ).

Expanding Eq. (36) up to leading order in ρ yields

K(u) = λ(e^{u} − 1) + O(ρ).    (38)

Thus, we obtain the CGF of the Poisson distribution (17) for ρ → 0.
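The additivity can also be confirmed numerically: convolving the pmf of an NB distribution with itself should reproduce the pmf with doubled mean parameter and the same dispersion. A sketch with illustrative values, assuming a parametrization with mean λ and variance (1 + ρ)λ:

```python
import numpy as np
from math import lgamma, exp, log

def nb_pmf(xs, lam, rho):
    """NB pmf over the grid xs, assuming mean lam and variance (1 + rho) * lam."""
    r, p = lam / rho, rho / (1.0 + rho)
    return np.array([exp(lgamma(k + r) - lgamma(r) - lgamma(k + 1)
                         + r * log(1 - p) + k * log(p)) for k in xs])

lam, rho = 1.5, 0.8
xs = np.arange(0, 200)
single = nb_pmf(xs, lam, rho)
summed = np.convolve(single, single)[:200]  # pmf of the sum of two iid draws
direct = nb_pmf(xs, 2 * lam, rho)           # NB with doubled mean, same rho
```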

Appendix C Dirichlet-multinomial distribution

In this appendix, we summarize several properties of the Dirichlet-multinomial (DM) distribution and provide additional insight. The probability distribution function of the DM distribution is expressed as

DM(z_1, …, z_K; n, α_1, …, α_K) = [n! Γ(α_0) / Γ(n + α_0)] Π_{k=1}^{K} Γ(z_k + α_k) / (z_k! Γ(α_k)),    (39)

where Σ_{k=1}^{K} z_k = n and α_0 = Σ_{k=1}^{K} α_k. The mean and variance of z_k are given by

E[z_k] = n α_k/α_0,  Var[z_k] = n (α_k/α_0)(1 − α_k/α_0)(n + α_0)/(1 + α_0).    (40)

The DM distribution is conventionally derived as a compound distribution of Dirichlet and multinomial distributions [15]. We provide another derivation here. Let z_1, …, z_K be independently distributed with additive probability distributions f(z_k; λ_k). From the additivity property, n = Σ_{k} z_k follows f(n; λ_0) with λ_0 = Σ_{k} λ_k. Given n, the conditional distribution of (z_1, …, z_K) is

p(z_1, …, z_K | n) = Π_{k=1}^{K} f(z_k; λ_k) / f(n; λ_0).    (41)

Using the NB distribution (21) for f, Eq. (41) becomes

p(z_1, …, z_K | n) = [n! Γ(λ_0/ρ) / Γ(n + λ_0/ρ)] Π_{k=1}^{K} Γ(z_k + λ_k/ρ) / (z_k! Γ(λ_k/ρ)),    (42)

which is the DM distribution (39) with α_k = λ_k/ρ. Therefore, the DM distribution is the conditional distribution derived from the NB distribution.
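This identity can be checked numerically in log space; the values are illustrative and the helper functions are hypothetical names.

```python
from math import lgamma, log

def log_nb(x, lam, rho):
    """Log NB pmf, assuming mean lam and variance (1 + rho) * lam."""
    r, p = lam / rho, rho / (1.0 + rho)
    return (lgamma(x + r) - lgamma(r) - lgamma(x + 1)
            + r * log(1 - p) + x * log(p))

def log_dm(zs, alphas):
    """Log Dirichlet-multinomial pmf with concentration parameters alphas."""
    n, a0 = sum(zs), sum(alphas)
    out = lgamma(n + 1) + lgamma(a0) - lgamma(n + a0)
    for z, al in zip(zs, alphas):
        out += lgamma(z + al) - lgamma(al) - lgamma(z + 1)
    return out

lams, rho = [1.0, 2.0, 0.5], 0.7
zs = [3, 1, 2]
n = sum(zs)
# Ratio of NB pmfs (Eq. (41)) versus the DM pmf with alpha_k = lambda_k / rho.
lhs = sum(log_nb(z, l, rho) for z, l in zip(zs, lams)) - log_nb(n, sum(lams), rho)
rhs = log_dm(zs, [l / rho for l in lams])
```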

Note that if we use the Poisson distribution (17) for f, the conditional distribution (41) becomes a multinomial distribution:

p(z_1, …, z_K | n) = [n! / (z_1! ⋯ z_K!)] Π_{k=1}^{K} (λ_k/λ_0)^{z_k}.    (43)

Thus, it follows from the convergence of the NB distribution to the Poisson distribution that the DM distribution (42) converges to the multinomial distribution (43) for ρ → 0. Table 1 summarizes the relationship between the four distributions.

Additive distribution        Conditional distribution
Poisson (17)                 Multinomial (43)
  ↑ (ρ → 0)                    ↑ (ρ → 0)
Negative binomial (21)       Dirichlet-multinomial (42)
Table 1: Relationship between the four distributions.

References


  • [1] P. Bak. How nature works: the science of self-organized criticality. Copernicus, 1996.
  • [2] J. M. Beggs and D. Plenz. Neuronal avalanches in neocortical circuits. Journal of Neuroscience, 23:11167–11177, 2003.
  • [3] J. O. Berger. Statistical Decision Theory and Bayesian Analysis. Springer, 2nd edition, 1993.
  • [4] E. W. Fox, M. B. Short, F. P. Schoenberg, K. D. Coronges, and A. L. Bertozzi. Modeling e-mail networks and inferring leadership using self-exciting point processes. Journal of the American Statistical Association, 111:564–584, 2016.
  • [5] T. Hastie, R. Tibshirani, and M. Wainwright. Statistical Learning with Sparsity: The Lasso and Generalizations. Chapman and Hall/CRC, 2015.
  • [6] A. G. Hawkes. Point spectra of some mutually exciting point processes. Journal of the Royal Statistical Society. Series B, 33:438–443, 1971.
  • [7] A. G. Hawkes. Spectra of some self-exciting and mutually exciting point processes. Biometrika, 58:83–90, 1971.
  • [8] J. M. Hilbe. Negative Binomial Regression. Cambridge University Press, 2nd edition, 2011.
  • [9] B. Jorgensen. The Theory of Dispersion Models. Chapman and Hall, 1997.
  • [10] M. Kaya, O. Erdogan, and J. Rokne, editors. From Social Data Mining and Analysis to Prediction and Community Detection. Springer, 2017.
  • [11] R. Kobayashi and R. Lambiotte. Tideh: Time-dependent Hawkes process for predicting retweet dynamics. In ICWSM 2016, 2016.
  • [12] E. Lewis, G. Mohler, P. J. Brantingham, and A. Bertozzi. Self-exciting point process models of civilian deaths in Iraq. Security Journal, 25:244–264, 2012.
  • [13] S. W. Linderman and R. P. Adams. Discovering latent network structure in point process data. In ICML, 2014.
  • [14] G. O. Mohler, M. B. Short, P. J. Brantingham, F. P. Schoenberg, and G. E. Tita. Self-exciting point process modeling of crime. Journal of the American Statistical Association, 106:100–108, 2011.
  • [15] K. W. Ng, G.-L. Tian, and M.-L. Tang. Dirichlet and Related Distributions: Theory, Methods and Applications. Wiley, 2011.
  • [16] Y. Ogata. Statistical models for earthquake occurrences and residual analysis for point processes. Journal of the American Statistical Association, 83:9–27, 1988.
  • [17] T. Omi, Y. Hirata, and K. Aihara. Hawkes process model with a time-dependent background rate and its application to high-frequency financial data. Physical Review E, 96:012303, 2017.
  • [18] T. Onaga and S. Shinomoto. Bursting transition in a linear self-exciting point process. Physical Review E, 89:042817, 2014.
  • [19] T. Onaga and S. Shinomoto. Emergence of event cascades in inhomogeneous networks. Scientific Reports, 6:33321, 2016.
  • [20] M. A. Rizoiu, L. Xie, S. Sanner, M. Cebrián, H. Yu, and P. Van Hentenryck. Expecting to be hip: Hawkes intensity processes for social media popularity. In WWW, 2017.
  • [21] D. Sornette. Critical Phenomena in Natural Sciences: Chaos, Fractals, Self organization and Disorder: Concepts and Tools. Springer, 2nd edition, 2006.
  • [22] N. G. van Kampen. Stochastic Processes in Physics and Chemistry. North Holland, 3rd edition, 2007.
  • [23] W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling. Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press, 2nd edition, 1992.
  • [24] H. Xu and H. Zha. A Dirichlet mixture model of Hawkes processes for event sequence clustering. In NIPS, 2017.
  • [25] R. Zafarani, M. A. Abbasi, and H. Liu. Social Media Mining: An Introduction. Cambridge University Press, 2014.
  • [26] Q. Zhao, M. A. Erdogdu, H. Y. He, A. Rajaraman, and J. Leskovec. Seismic: A self-exciting point process model for predicting tweet popularity. In KDD’ 15, 2015.
  • [27] K. Zhou, H. Zha, and L. Song. Learning social infectivity in sparse low-rank networks using multi-dimensional Hawkes Processes. In AISTATS, 2013.