Burstiness Scale: a highly parsimonious model for characterizing random series of events

02/20/2016 ∙ by Rodrigo A. S. Alves, et al.

The problem of accurately and parsimoniously characterizing random series of events (RSEs) present in the Web, such as e-mail conversations or Twitter hashtags, is not trivial. Reports found in the literature reveal two apparently conflicting visions of how RSEs should be modeled. On one side are the Poissonian processes, in which consecutive events follow each other at relatively regular times and are not correlated. On the other side are the self-exciting processes, which are able to generate bursts of correlated events and periods of inactivity. The existence of many and sometimes conflicting approaches to model RSEs is a consequence of the unpredictability of the aggregated dynamics of our individual and routine activities, which sometimes show simple patterns, but sometimes result in irregular rising and falling trends. In this paper we propose a highly parsimonious way to characterize general RSEs, namely the Burstiness Scale (BuSca) model. BuSca views each RSE as a mix of two independent processes: a Poissonian one and a self-exciting one. Here we describe a fast method to extract the two parameters of BuSca that, together, give the burstiness scale, which represents how much of the RSE is due to bursty and viral effects. We validated our method on eight diverse and large datasets containing real random series of events seen in Twitter, Yelp, e-mail conversations, Digg, and online forums. Results showed that, even using only two parameters, BuSca is able to accurately describe RSEs seen in these diverse systems, which can leverage many applications.


1 Introduction

What is the best way to characterize random series of events (RSEs) present in the Web, such as Yelp reviews or Twitter hashtags? Descriptively, one can characterize a given RSE as constant and predictable for a period, then bursty for another, back to being constant and, after a long period, bursty again. Formally, the answer to this question is not trivial. It certainly must include the extreme case of the homogeneous Poisson Process (PP) [16], which has a single and intuitive rate parameter λ. Consecutive events of a PP follow each other at relatively regular times, and λ represents the constant rate at which events arrive. The class of Poissonian, or completely random, processes also includes the case where λ varies with time. In this class, events must be without any aftereffects, that is, there is no interaction between any sequence of events [33]. There are RSEs seen in the Web that were accurately modeled by a Poissonian process, such as many instances of viewing activity on Youtube [7], e-mail conversations [23] and hashtag posts on Twitter [21].

Unfortunately, recent analyses have shown that this simple and elegant model is unsuitable for many cases [27, 9, 37, 36]. Such analyses revealed that many RSEs produced by humans have very long periods of inactivity and also bursts of intense activity [2, 17], in contrast to Poissonian processes, where events occur at a fairly constant rate. Moreover, many RSEs in the Web also have strong correlations between historical and future data [7, 37, 36, 10, 25], a feature that must not occur in Poissonian processes. These RSEs fall into a particular class of random point processes, the so-called self-exciting processes [33]. The problem with characterizing such RSEs is that they occur in many shapes and in very unpredictable ways [36, 26, 11, 10, 41, 7]. They have the so-called "quick rise-and-fall" property [26] of bursts in cascades, producing correlations between past and future data that are difficult to capture with regression-based methods [40].

As pointed out by [7], the aggregated dynamics of our individual activities is a consequence of a myriad of factors that guide individual actions, which produce a plethora of collective behaviors. Thus, in order to accurately capture all patterns seen in human-generated random series of events, researchers have been proposing models with many parameters, most of the time tailored to a specific activity in a specific system [24, 41, 21, 43, 40, 26, 11, 10, 25, 29, 44, 42]. Going against this trend, in this work we propose the Burstiness Scale (BuSca) model, a highly parsimonious model to characterize RSEs that can be (i) purely Poissonian, (ii) purely self-exciting or (iii) a mix of these two behaviors. In BuSca, the underlying Poissonian process is responsible for the arrival of events related to the routine activity dynamics of individuals, whereas the underlying self-exciting process is responsible for the arrival of bursty and ephemeral events, related to the endogenous (e.g. online social networks) and exogenous (e.g. mass media) mechanisms that drive public attention and generate the "quick rise-and-fall" property and correlations seen in many RSEs [7, 21]. To illustrate this, observe Figure 1, which shows the cumulative number of occurrences of three Twitter hashtags over time. In Figure 1a, the curve of hashtag #wheretheydothatat is a straight line, indicating that this RSE is well modeled by a PP. In Figure 1b, the curve of hashtag #cotto has long periods of inactivity and a burst of events, suggesting that the underlying process may be self-exciting in this case. Finally, in Figure 1c, the curve of hashtag #ta is apparently a mix of these two processes, exactly what BuSca aims to model.

(a) #wheretheydothatat
(b) #cotto
(c) #ta
Figure 1: Three real RSEs from the Twitter dataset.

Besides that, our goal is also to characterize general RSEs using the least amount of parameters possible. The idea is to propose a highly parsimonious model that can separate out constant and routine events from bursty and ephemeral events in general RSEs. We present and validate a particular and highly parsimonious case of BuSca, where the Poissonian process is given by a homogeneous Poisson process and the self-exciting process is given by a Self-Feeding Process (SFP) [36]. We chose these models because (i) both of them require a single parameter and (ii) they are on opposite ends of the spectrum. The PP is on the extreme side where the events do not interact with each other and inter-event times are independent. On the other extreme lies the SFP, where consecutive inter-event times are highly correlated. Even though BuSca has only two parameters, we show that, surprisingly, it is able to accurately characterize a large corpus of diverse RSEs seen in Web systems, namely Twitter, Yelp, e-mails, Digg, and online forums. We show that disentangling constant from ephemeral events in general RSEs may reveal interesting, relevant and fascinating properties about the underlying dynamics of the system in question in a very summarized way, leveraging applications such as monitoring systems, anomaly detection methods, flow predictors, among others.

In summary, the main contributions of this paper are:

  • BuSca, a widely applicable model that parsimoniously characterizes communication time series with only two intuitive parameters, validated on eight different datasets. From these parameters, we can calculate the burstiness scale B, which represents how much of the process is due to bursty and viral effects.

  • A fast and scalable method to separate events arising from a homogeneous Poisson Process from those arising from a self-exciting process in RSEs.

  • A method to detect anomalies and another to detect bursts in random series of events.

The rest of the paper is organized as follows. In Section 2, we provide a brief survey of the related work. Our model is introduced in Section 3, together with the algorithm to estimate its parameters. We show that the maximum likelihood estimator is biased and show how to fix the problem in Section 4, discussing a statistical test procedure to discriminate between extreme cases in Section 5. In Section 6, we describe the eight datasets used in this work, and we show the goodness of fit of our model in Section 7. A comparison with the Hawkes model is given in Section 8. We show two applications of our model in Section 9. We close the paper with Section 10, where we present our conclusions.

2 Related work

Characterizing the dynamics of human activity in the Web has attracted the attention of the research community [2, 7, 24, 39, 41, 21, 43, 40, 26, 11, 10, 25, 29, 44, 42] as it has implications that can benefit a large number of applications, such as trend detection [29], popularity prediction [22], clustering [8], anomaly detection [37], among others. The problem is that uncovering the rules that govern human behavior is a difficult task, since many factors may influence an individual's decision to take action. Analyses of real data have shown that human activity in the Web can be highly unpredictable, ranging from completely random [7, 5, 18, 19, 23, 24] to highly correlated and bursty [2, 17, 36, 26, 25, 42, 29, 44].

As one of the first attempts to model bursty RSEs, Barabási et al. [2] proposed that bursts and heavy tails in human activities are a consequence of a decision-based queuing process, where tasks are executed according to some perceived priority. In this way, most of the tasks would be executed rapidly while some of them may take a very long time. The queuing models generate power law distributions, but do not correlate the timing of events explicitly. As an alternative to queuing models, many researchers started to consider self-exciting point processes, which are also able to model correlations between historical and future events. In a pioneering effort, Crane and Sornette [7] modeled the viewing activity on Youtube as a Hawkes process. They proposed that the burstiness seen in data is a response to endogenous word-of-mouth effects or sudden exogenous perturbations. This seminal paper inspired many other efforts to model human dynamics in the Web as a Hawkes process [26, 25, 42, 29, 44]. Similar to the Hawkes process, the Self-Feeding Process (SFP) [36] is another type of self-exciting process that also captures correlations between historical and future data, and it has also been used to model human dynamics in the Web [36, 37, 10]. Different from Hawkes, whose conditional intensity explicitly depends on all previous events, the SFP considers only the previous inter-event time.

Although there is strong evidence that self-exciting processes are well suited to model human dynamics in the Web, there are studies showing that the Poisson process and its variations are also appropriate [7, 5, 18, 19, 23, 24]. [5, 18] showed that Internet traffic can be accurately modeled by a Poisson process under particular circumstances, e.g. heavy traffic. When analyzing Youtube viewing activity, [7] verified that 90% of the videos analyzed either do not experience much activity or can be described statistically as a Poisson process. Malmgren et al. [23, 24] showed that a non-homogeneous Poisson process can accurately describe e-mail communications. In this case, the rate varies with time in a periodic fashion (e.g., people answer e-mails in the morning, then go to lunch, then answer more e-mails, etc.).

These apparently conflicting approaches, i.e., self-exciting and Poissonian approaches, motivated many researchers to investigate and characterize this plethora of human behaviors found in the Web. For instance, [35] used a machine learning approach to characterize videos on Youtube. From several features extracted from Youtube and Twitter, the authors verified that the current tweeting rate, along with the volume of tweets since the video was uploaded, are the two most important Twitter features for classifying a Youtube video as viral or popular. In this direction, [21] verified that Twitter hashtag activities may be continuous, periodic or concentrated around an isolated peak, while [11] found that revisits account for 40% to 96% of the popularity of an object in Youtube, Twitter and LastFm, depending on the application. [43] verified that the popularity of a Youtube video can go through multiple phases of rise and fall, probably generated by a number of different background random processes that are superimposed onto the power-law behavior. The main difference between these models and ours is that the former mainly focus on representing all the details and random aspects of very distinct RSEs, which naturally demands many parameters. In our case, our proposal aims to disentangle the bursty and constant behaviors of RSEs as parsimoniously as possible. Surprisingly, our model is able to accurately describe a large and diverse corpus of RSEs seen in the Web with only two parameters.

In the version of BuSca considered here, random series of events are modeled by a mixture of two independent processes: a Poisson process, which accounts for the constant background behavior, and a one-parameter SFP, which accounts for the bursty behavior. A natural question that arises is: how different is this model from the widely used Hawkes process? The differences are twofold. First, in the Hawkes process, every single arriving event excites the process, i.e., is correlated with the appearance of future events. In our proposal, since the PP is independent, non-correlated events may arrive at any time. Second, our model is even more parsimonious than the Hawkes process: two parameters against three (considering the most parsimonious version of the Hawkes process). In Section 8 we quantitatively show that our proposed model is better suited to real data than the Hawkes process.

3 Modeling information bursts

Point processes are the stochastic process framework developed to model a random sequence of events (RSE). Let t_1 ≤ t_2 ≤ … be a sequence of random event times and let N(a, b) be the random number of events in the interval [a, b). We simplify the notation when the interval starts at 0 by writing simply N(t) = N(0, t). Let H_t be the random history of the process up to, but not including, time t. A fundamental tool for modeling and for inference in point processes is the conditional intensity rate function. It completely characterizes the distribution of the point process and it is given by

λ(t|H_t) = lim_{δ→0} E[N(t, t + δ) | H_t] / δ .   (1)

The interpretation of this random function is that, for a small time interval δ, the value of δ λ(t|H_t) is approximately the expected number of events in [t, t + δ). It can also be interpreted as the probability that the interval [t, t + δ) has at least one event, given the random history of the process up to t. The notation emphasizes that the conditional intensity at time t depends on the random events that occur previous to t. This implies that λ(t|H_t) is a random function rather than a typical mathematical function. The unconditional and non-random function λ(t) = E[λ(t|H_t)] is called the intensity rate function, where the expectation is taken over all possible process histories.

The most well known point process is the Poisson process, where λ(t|H_t) = λ(t). That is, the occurrence rate may vary in time, but it is deterministic. The main characteristic of a Poisson process is that the counts in disjoint intervals are independent random variables, with N(a, b) having a Poisson distribution with mean given by the integral of λ(t) over [a, b). When the intensity does not vary in time, with λ(t) ≡ μ for all t, we have a homogeneous Poisson process.
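To make this concrete, the following Python sketch (an illustration of ours, not code from the paper) simulates a homogeneous Poisson process on [0, T) by accumulating i.i.d. exponential inter-event times with rate μ:

    import numpy as np

    def simulate_poisson(mu, T, rng=None):
        """Homogeneous Poisson process on [0, T): i.i.d. exponential gaps."""
        rng = rng or np.random.default_rng(0)
        times, t = [], 0.0
        while True:
            t += rng.exponential(1.0 / mu)  # mean gap = 1/mu
            if t >= T:
                return np.array(times)
            times.append(t)

    events = simulate_poisson(mu=0.5, T=100.0)
    print(len(events))  # close to mu * T = 50 on average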

3.1 Self-feeding process

The self-feeding process (SFP) [36] conditional intensity has a simple dependence on its past. Locally, it acts as a homogeneous Poisson process, but its conditional intensity rate is inversely proportional to the temporal gap between the two last events. More specifically, the conditional intensity function is given by

λ_S(t|H_t) = e^{−γ} / Δt_{p(t)} ,   (2)

where p(t) is the index of the last event before t, Δt_{p(t)} is the length of the inter-event interval ending at that event, and γ ≈ 0.5772 is the Euler constant. This implies that the inter-event times are exponentially distributed with expected value e^{γ} Δt_{p(t)}, so the inter-event times follow a Markovian property. The constant ρ, which initializes the first inter-event interval, is the median of the inter-event times. A more general version of the SFP uses an additional exponent parameter, which was taken equal to 1 in this work. The motivation for this is that, in many databases analysed previously [36], this exponent was found to be close to 1. An additional benefit of this decision is the simpler likelihood calculations involving the SFP.
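Under our reading of equation (2), an SFP realization can be simulated by drawing each new gap from an exponential distribution with mean e^{γ} times the previous gap, initializing the first gap at ρ. This is a hedged sketch, not the authors' code:

    import numpy as np

    EULER_GAMMA = 0.5772156649015329

    def simulate_sfp(rho, T, rng=None):
        """SFP on [0, T): intensity e^{-gamma} / (last inter-event gap)."""
        rng = rng or np.random.default_rng(1)
        times, gap, t = [], rho, 0.0
        while True:
            gap = rng.exponential(np.exp(EULER_GAMMA) * gap)  # Markovian gaps
            t += gap
            if t >= T:
                return np.array(times)
            times.append(t)

    print(len(simulate_sfp(rho=1.0, T=100.0)))  # highly variable across runs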

Figure 2a presents three realizations of the SFP process with the same parameter ρ. The vertical axis shows the accumulated number of events up to time t. One striking aspect of this plot is its variability. In the first 40 time units, the lightest individual shows a rate of approximately 0.5 events per unit time while the darkest one has a rate of 2.25. Having accumulated very different numbers of events, neither has many additional points after that time. The third one has a more constant rate of increase during the whole time period. Hence, with the same parameter ρ, we can see very different realizations of the SFP process. A common characteristic of the SFP instances is the mix of bursty periods alternating with quiet intervals.

(a) SFP realizations
(b) Twitter data
Figure 2: Three instances of the SFP process with the same parameter ρ in the time interval (left) and a real time series with the #shaq hashtag from Twitter (right).

However, in many datasets we have observed a departure from this SFP behavior. The most noticeable discrepancy is the absence of the long quiet periods predicted by the SFP model. To be concrete, consider the point process realization in Figure 2b. This plot is the cumulative counting of Twitter posts with the hashtag #shaq. There are two clear bursts, when the series has a large increase in a short period of time. Apart from these two periods, the counts increase at a regular and almost constant rate. We do not observe long stretches of time with no events, as one would expect to see if an SFP process were generating these data.

3.2 The BuSca Model

In this work, we propose a point process model that exhibits the behavior consistently observed in our empirical findings: we want a mix of random bursts followed by quieter periods, and we want realizations where the long silent periods predicted by the SFP do not occur. To obtain these two aspects we propose a new model that is a mixture of the SFP process, to guarantee the presence of random bursts, with a homogeneous Poisson process, to generate a random but rather constant rate of events, breaking the long empty spaces created by the SFP. While the SFP captures the viral and ephemeral "rise and fall" patterns, the PP captures the routine activities, acting as random background noise added to a signal point process. We call this model the Burstiness Scale (BuSca) model.

Figure 3 shows the main idea of BuSca. The observed events are those on the bottom line. They are composed of two types of events, each one generated by a different point process. Each observed event can come either from a Poisson process (top line) or from an SFP process (middle line). We observe the mixture of these two types of events on the third line without an identifying label. This lack of knowledge of the source process for each event is the cause of most inferential difficulties, as we discuss later.

Mixture

SFP

Poisson
Figure 3: The BuSca model. The top line displays the events from the Poisson component along the timeline while the middle line displays those from the SFP component. The user observes only the third line, the combination of the first two, without a label to identify the source process associated with each event.

A point process is called a simple point process if its realizations contain no coincident points. Since both the SFP and the Poisson process are simple, so is the mixture point process. Also, this guarantees that each event belongs to one of the two component processes. Figure 4 shows different realizations of the mixture process in the time interval [0, 100). In each plot, the curves show the cumulative number of events up to time t. The blue line represents a homogeneous Poisson process realization with parameter μ while the green curve represents the SFP with parameter ρ. The red curve represents the mixture of the two other realizations.

(a)
(b)
(c)
Figure 4: Realizations of the mixture process with different mixture compositions in the time interval [0, 100).
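Because the two components are independent, a BuSca realization is just the superposition of a PP and an SFP realization. A minimal sketch reusing the simulators above (our helper names, not the paper's):

    import numpy as np

    def simulate_busca(mu, rho, T, rng=None):
        """Superpose independent PP and SFP realizations on [0, T)."""
        rng = rng or np.random.default_rng(2)
        pp = simulate_poisson(mu, T, rng)    # routine background events
        sfp = simulate_sfp(rho, T, rng)      # bursty, self-exciting events
        t = np.concatenate([pp, sfp])
        z = np.array(['P'] * len(pp) + ['S'] * len(sfp))  # hidden labels
        order = np.argsort(t)
        return t[order], z[order]

    times, labels = simulate_busca(mu=0.5, rho=2.0, T=100.0)
    print(len(times), round(float(np.mean(labels == 'S')), 2))  # realized B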

3.3 The likelihood function

The log-likelihood function for any point process is a function of the conditional intensity λ(t|H_t) and of the events t_1, …, t_n observed in [0, T):

ℓ = Σ_{i=1}^{n} log λ(t_i|H_{t_i}) − ∫_0^T λ(t|H_t) dt .   (3)

The conditional intensity function of BuSca is the sum of the conditional intensities of the component processes, the Poisson intensity μ and the SFP intensity λ_S(t|H_t):

λ(t|H_t) = μ + λ_S(t|H_t) .   (4)

The stochastic history H_t of the mixed process contains only the events' times but not their identifying component-process labels, either S (from SFP) or P (from PP), for each event. The log-likelihood function for the mixture process observed in the time interval [0, T) is given by

ℓ(μ, ρ) = Σ_{i=1}^{n} log( μ + λ_S(t_i|H_{t_i}) ) − μT − ∫_0^T λ_S(t|H_t) dt .   (5)

The log-likelihood (5) is not computable because λ_S(t_i|H_{t_i}) requires knowledge of the last SFP inter-event time at each t_i. This would be known only if the source-process label for each event in the observed mixture were also known. Since these labels are hidden, we adopt the EM algorithm to obtain the maximum likelihood estimates μ̂ and ρ̂. We define the burstiness scale B as the percentage of bursty events in a given RSE; it is estimated by the expected proportion of events that come from the pure SFP process under the fitted model. The latent labels are also inferred as part of the inferential procedure.
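For intuition, if the labels were observed, the complete-data version of (5) would be easy to evaluate, because λ_S is piecewise constant between events. The sketch below uses our reconstructed notation and is only an illustration:

    import numpy as np

    EULER_GAMMA = 0.5772156649015329

    def loglik_complete(times, labels, mu, rho, T):
        """Complete-data log-likelihood (5), with labels 'P'/'S' known."""
        ll, prev_t = 0.0, 0.0
        gap, last_sfp = rho, None            # first SFP gap initialized at rho
        lam_s = np.exp(-EULER_GAMMA) / gap
        for t, z in zip(times, labels):
            ll -= lam_s * (t - prev_t)       # integral of lambda_S over the gap
            ll += np.log(mu + lam_s)         # event term of (5)
            if z == 'S':                     # an SFP event updates the intensity
                if last_sfp is not None:
                    gap = t - last_sfp
                last_sfp = t
                lam_s = np.exp(-EULER_GAMMA) / gap
            prev_t = t
        ll -= lam_s * (T - prev_t)           # tail of the integral
        return ll - mu * T                   # Poisson compensator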

The use of the EM algorithm in the case of point process mixtures is new and presents several special challenges with respect to the usual EM method. The reason for the difficulty is the lack of independent pieces in the likelihood. The correlated sequential data in the likelihood brings several complications dealt with in the next two sections.

3.4 The E step

The EM algorithm requires the calculation of Q(μ, ρ), the expected value of the log-likelihood (5) with respect to the hidden labels. Since the expectation acts only on the label-dependent terms, we have

Q(μ, ρ) = Σ_{i=1}^{n} E[ log λ(t_i|H_{t_i}) ] − μT − E[ ∫_0^T λ_S(t|H_t) dt ] .   (6)

A naive way to obtain the required

E[ log λ(t_i|H_{t_i}) ]   (7)

is to consider all possible label assignments to the events and their associated probabilities. Knowing which events belong to the SFP component, we also know the value of λ_S(t_i|H_{t_i}). Hence, it is trivial to evaluate the conditional intensity under each one of these assignments for any t_i, and finally to obtain (7) by summing up all these values multiplied by the corresponding label-assignment probabilities. This is unfeasible because the number of label assignments is too large, unless the number of events is unrealistically small.

To overcome this difficulty, we developed a dynamic programming algorithm. Figure 5 shows the conditional intensity up to, and not including, the current event as green line segments and the constant Poisson intensity μ as a blue line. Our algorithm is based on a fundamental observation: if an event comes from the Poisson process, it does not change the current SFP conditional intensity until the next event comes in.

Let z_i denote the hidden label of event t_i, with z_i = S (SFP) or z_i = P (Poisson). We start by calculating E[λ_S(t_i|H_{t_i})] conditioning on the label of the previous event:

E[λ_S(t_i|H_{t_i})] = E[λ_S(t_i|H_{t_i}) | z_{i−1} = P] P(z_{i−1} = P) + E[λ_S(t_i|H_{t_i}) | z_{i−1} = S] P(z_{i−1} = S) .   (8)

The evaluation of λ_S(t_i|H_{t_i}) depends on the label assigned to t_{i−1}. If z_{i−1} = P, as in Figure 6, the last SFP inter-event time interval is the same as that for t_{i−1}, since a Poisson event does not change the SFP conditional intensity. Therefore, in this case,

λ_S(t_i|H_{t_i}) = λ_S(t_{i−1}|H_{t_{i−1}}) .   (9)

For the integral component in (6), we need the conditional intensity for t in the continuous interval between successive events, and not only at the observed values. However, by the same argument used for (9), we have λ_S(t|H_t) = λ_S(t_{i−1}|H_{t_{i−1}}) for t ∈ [t_{i−1}, t_i).

The probability that the i-th mixture event comes from the SFP component is proportional to its conditional intensity at the event time t_i:

P(z_i = S) = E[λ_S(t_i|H_{t_i})] / ( μ + E[λ_S(t_i|H_{t_i})] ) .   (10)

Therefore, using (9) and (10), we can rewrite (8) approximately as a recurrence relationship:

E[λ_S(t_i|H_{t_i})] = E[λ_S(t_{i−1}|H_{t_{i−1}})] P(z_{i−1} = P) + E[λ_S(t_i|H_{t_i}) | z_{i−1} = S] P(z_{i−1} = S) .   (11)

We turn now to explain how to obtain the last term in (11). If z_{i−1} = S, as exemplified in Figure 7, the last SFP inter-event time must be updated and it will depend on the most recent SFP event previous to t_{i−1}. There are only i − 2 possibilities for this last previous SFP event, and this fact is exploited in our dynamic programming algorithm. Recursively, we condition on these possibilities to evaluate the last term in (11). More specifically, its value is given by

E[λ_S(t_i|H_{t_i}) | z_{i−1} = S] = (e^{−γ} / (t_{i−1} − t_{i−2})) P(z_{i−2} = S) + E[λ_S(t_i|H_{t_i}) | z_{i−1} = S, z_{i−2} = P] P(z_{i−2} = P) .   (12)

When the last two events t_{i−1} and t_{i−2} come from the SFP process, we know that the conditional intensity of the SFP process is given by the first term in (12). The unknown expectation in (12) is obtained by conditioning on the label of the next event backwards. In this way, we recursively walk backwards, always depending on one single unknown expectation of the form E[λ_S(t_i|H_{t_i}) | z_{i−1} = S, z_{i−2} = P, …, z_j = P], where j < i − 1. At last, we can calculate E[λ_S(t_i|H_{t_i})] in (11) by the iterative expression

E[λ_S(t_i|H_{t_i})] = E[λ_S(t_{i−1}|H_{t_{i−1}})] P(z_{i−1} = P) + P(z_{i−1} = S) Σ_{j=1}^{i−2} (e^{−γ} / (t_{i−1} − t_j)) P(z_j = S) Π_{k=j+1}^{i−2} P(z_k = P) .   (13)

We have more than one option for the initial conditions of this iterative computation. One is to assume that the first two events belong to the SFP. Another one is to set the first SFP inter-event interval equal to ρ and assume that the first event comes from the SFP. Even with a moderate number of events, these initial condition choices affect the final results very little and either of them can be selected in any case.

To end the E-step, the log-likelihood in (6) also requires the expected value of the integral term, E[∫_0^T λ_S(t|H_t) dt]. This is calculated in an entirely analogous way to the above.
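Since the full backward recursion is only partially recoverable above, the sketch below shows a simplified forward approximation of the E step of our own making: it propagates E[λ_S] with the truncated sum of (13), using the gap to the immediately preceding event as a proxy for the SFP gap (a simplification on our part), and converts intensities into membership probabilities via (10):

    import numpy as np

    EULER_GAMMA = 0.5772156649015329

    def e_step_probs(times, mu, rho, depth=10):
        """Approximate P(z_i = S) for every event (simplified sketch)."""
        n = len(times)
        p_s = np.empty(n)
        exp_lam = np.empty(n)                    # E[lambda_S just before t_i]
        exp_lam[0] = np.exp(-EULER_GAMMA) / rho  # initial condition
        p_s[0] = exp_lam[0] / (mu + exp_lam[0])  # eq. (10)
        for i in range(1, n):
            total, w_all_poisson = 0.0, 1.0
            # truncated backward sum of (13): candidate last-SFP events
            for j in range(i - 1, max(i - 1 - depth, -1), -1):
                prev = times[j - 1] if j > 0 else times[j] - rho
                lam_j = np.exp(-EULER_GAMMA) / (times[j] - prev)
                total += lam_j * p_s[j] * w_all_poisson
                w_all_poisson *= 1.0 - p_s[j]    # events after j were Poisson
            exp_lam[i] = total + w_all_poisson * np.exp(-EULER_GAMMA) / rho
            p_s[i] = exp_lam[i] / (mu + exp_lam[i])
        return p_s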

3.5 The M step

Different from the E step, the M step did not require special development from us. Having obtained the expected log-likelihood (6), we simply maximize it and update the estimated parameter values of μ and ρ. In this maximization procedure, we constrain the search within two intervals. For an observed point pattern with n events in [0, T), we use the interval [0, n/T] for μ. The intensity must be positive, and hence the zero lower bound represents a pure SFP process, while the upper bound n/T is the maximum likelihood estimate of μ in the other extreme case of a pure homogeneous Poisson process. For the ρ parameter, we adopt the search interval (0, T]. Since ρ is the median inter-event time in the SFP component, a value of ρ close to zero induces a pattern with a very large number of events, while ρ = T represents, in practice, a pattern with no SFP events.

Figure 5: Start of the recursion: the SFP conditional intensity (green segments) and the constant Poisson intensity (blue line).

Figure 6: The case where the previous event comes from the Poisson component (z_{i−1} = P).

Figure 7: The case where the previous event comes from the SFP component (z_{i−1} = S).

3.6 Complexity analysis

The E step calculation is represented by (6) and has complexity O(n), where n is the number of events. The E[λ_S(t_i|H_{t_i})] needed in (13) requires O(n) operations due to the product operator which, when summed over all events, would end up costing O(n²). However, we simplify this calculation by considering only the last 10 iterations of the product operator. This is possible because the sequential multiplication of probabilities reduces significantly the weight of the first events in the calculation of the SFP intensity function. The results were very close to the non-simplified calculation, but with O(n) operations. After the E[λ_S(t_i|H_{t_i})] are calculated, the integral in (6) is simply the evaluation of the area under a step function with n steps and therefore needs O(n) calculations. Substituting the terms in (6) by their complexity order and summing them, we find that the E step requires O(n) operations.

In the M step, the main cost is related to the number of runs that the maximization algorithm requires. We used coordinate ascent for each parameter, μ and ρ. The EM steps are repeated until convergence or until a maximum number of steps is reached. Therefore, the final complexity of our algorithm is O(n) times the number of EM loops. On average, the EM algorithm required 7.10 loops considering all real datasets described in Section 6. In 95% of the cases, it took less than 21 loops.

4 MLE bias and a remedy

Several simulations were performed to verify that the estimation of the parameters proposed in Section 3 is suitable. There is no theory about the MLE behavior in the case of point process data following a complex mixture model such as ours. For this, synthetic data were generated by varying the sample size n and the parameters μ and ρ of the mixture. We varied n over nine values for each pair (μ, ρ). The parameters μ and ρ were empirically selected in such a way that the expected percentage of points coming from the SFP process (denoted by the burstiness scale parameter B) varied over ten levels, from predominantly Poisson to pure SFP.

For each pair (n, B), we conducted 100 simulations, totaling 9,000 simulations. In each simulation, the estimated parameters (μ̂, ρ̂) were calculated by the EM algorithm. Since their ranges vary along the simulations, we considered their relative differences with respect to the true values (μ, ρ). For μ, define

ξ_μ = μ̂/μ − 1, if μ̂ ≥ μ;   ξ_μ = −(μ/μ̂ − 1), if μ̂ < μ .   (14)

The main objective of this measure is to treat symmetrically the relative differences between μ̂ and μ. Consider a situation where ξ_μ = 1. This means that μ̂ = 2μ. Symmetrically, if ξ_μ = −1, we have μ̂ = μ/2. The value ξ_μ = 0 implies that μ̂ = μ. We define ξ_ρ analogously.
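In code form, the symmetric measure (14) reads (our illustration):

    def xi(est, true):
        """Symmetric relative difference of eq. (14)."""
        return est / true - 1.0 if est >= true else -(true / est - 1.0)

    assert xi(2.0, 1.0) == 1.0    # overestimation by a factor of 2
    assert xi(0.5, 1.0) == -1.0   # underestimation by a factor of 2
    assert xi(1.0, 1.0) == 0.0    # perfect estimate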

4.1 The μ estimator

The results for ξ_μ are shown in Figure 8. Each boxplot corresponds to one of the 90 possible combinations of (n, B). The vertical blue lines separate out the different values of n. Hence, the first 10 boxplots are those calculated for the combinations of the smallest n with the ten levels of B. The next 10 boxplots correspond to the values of ξ_μ for the next value of n, and so on. The absolute values of ξ_μ were censored at a maximum value, represented by the two horizontal red lines.

Figure 8: Boxplots of ξ_μ according to n and B.

The estimator μ̂ is well behaved, with small bias and with variance decreasing with the sample size. The only cases where it has a large variance are when the total sample size is very small and, at the same time, the percentage of Poisson events is also very small. For example, in the extreme combination of the smallest n with the largest SFP share, we expect to have only 20 unidentified Poisson events, and it is not reasonable to expect an accurate estimate in this situation.

4.2 The ρ estimator

The results for ξ_ρ from the simulations are shown in Figure 9, in the same grouping format as in Figure 8. It is clear that ρ̂ overestimates the true value of ρ in all cases, with the bias increasing as the Poisson process share increases. In the extreme situation when the mixture contains no SFP events, a large ρ̂ leads to an erroneously small expected number of SFP events in the observation time interval; indeed, a mixture with Poisson process events only corresponds to ρ → ∞. In addition to the bias problem, the estimator ρ̂ also has a large variance when the SFP process has a small number of events.

We believe that the poor performance of the EM algorithm estimator ρ̂ is related to the calculation of the expected value of the likelihood function. This calculation was done using approximations to deal with the unknown event labels, which directly influence the calculation of the SFP stochastic intensity function. This influence has less impact on μ̂, since the Poisson process intensity is deterministic and fixed during the entire interval.

Figure 9: Boxplots of ξ_ρ for the usual MLE according to n and B.
Figure 10: Boxplots of ξ_ρ for the improved estimator according to n and B.

As ρ is the median inter-event time in a pure SFP process, a simple and robust estimator in this pure SFP situation is the empirical median of the observed inter-event intervals. Our alternative estimator for ρ deletes some carefully selected events from the mixture, reducing the dataset to a pure SFP process, and then takes the median of the inter-event times of the remaining events. More specifically, conditioned on the well behaved estimate μ̂, we generate pseudo events coming from a homogeneous Poisson process with rate μ̂ within the time interval [0, T). Each pseudo event then removes one nearby observed event, under the constraint that the deletions spread over the whole interval. This last constraint avoids the deletion being entirely concentrated in bursty regions. We assume that the left-over events constitute a realization of a pure SFP process and we use their median inter-event time as an estimator of ρ. As this is clearly affected by the randomly deleted events, we repeat this procedure many times and average the results to end up with a final estimate, which we will denote by ρ̂_med.
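The exact sequential deletion rule is garbled in our source; the sketch below implements one plausible reading, deleting, for each pseudo Poisson event, the nearest remaining observed event:

    import numpy as np

    def rho_med(times, mu_hat, T, reps=50, rng=None):
        """Average-of-medians estimator for rho after Poisson thinning."""
        rng = rng or np.random.default_rng(3)
        medians = []
        for _ in range(reps):
            remaining = list(times)
            n_del = min(rng.poisson(mu_hat * T), max(len(remaining) - 3, 0))
            for p in sorted(rng.uniform(0.0, T, size=n_del)):
                k = int(np.argmin(np.abs(np.asarray(remaining) - p)))
                remaining.pop(k)   # delete the observed event nearest to p
            medians.append(np.median(np.diff(remaining)))
        return float(np.mean(medians))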

The results obtained with the new estimator of ρ can be visualized in Figure 10. Its estimation error is clearly smaller than that obtained directly by the EM algorithm, with an underestimation of ρ only when the Poisson process component is dominant. This is expected because, when the SFP component has a small percentage of events, its corresponding estimate is highly variable. In this case, a large number of supposedly Poisson points will be deleted, leaving few SFP events to estimate the parameter and implying high instability.

In Figure 11 we can see the two estimation errors simultaneously. Each dot represents one pair (ξ_μ, ξ_ρ). The dots are concentrated around the origin and do not show any trend or correlation. This means that the estimation errors are approximately independent of each other and that the estimates are close to their real values.

Figure 11: ξ_μ versus ξ_ρ.

5 Classification test

When analysing a point process dataset, a preliminary analysis should test whether a simpler point process, comprised either of a pure SFP or of a pure Poisson process, fits the observed data as well as the more complex mixture model. Let Θ = (μ, ρ) and let the unconstrained parameter space be Ω = {(μ, ρ) : μ ≥ 0, ρ > 0}. We used the maximum likelihood ratio test statistic of the unconstrained model against the null hypothesis H₀ : Θ ∈ Ω₀, where, alternatively, we consider either Ω₀ = {Θ : ρ = ∞} or Ω₀ = {Θ : μ = 0} to represent the pure Poisson and the pure SFP processes, respectively. Then

Λ = 2 ( sup_{Θ ∈ Ω} ℓ(Θ) − sup_{Θ ∈ Ω₀} ℓ(Θ) ),

where the log-likelihood ℓ is given in (3). As a guide, we used a 0.05 threshold to deem the test significant. As a practical issue, since taking the median inter-event time of the SFP process equal to ∞ is not numerically feasible, we set it equal to the length of the observed total time interval.

As there is one free parameter in each case, one could expect the usual asymptotic distribution of Λ to follow a chi-square distribution with one degree of freedom. However, this classic result requires several strict assumptions about the stochastic nature of the data, foremost the independence of the observations, which is not the situation in our model. Therefore, to check the accuracy of this asymptotic distribution to gauge the test-based decisions, we carried out 2000 additional Monte Carlo simulations, half of them following a pure Poisson process, the other half following a pure SFP. Adding these pure cases to the mixed cases at the different percentage compositions described previously, we calculated the test p-values p_PP and p_SFP based on the usual chi-square distribution with one degree of freedom. Namely, with F_{χ²₁} being the cumulative distribution function of the chi-square distribution with one degree of freedom, and Λ_PP and Λ_SFP being the test statistics against the pure Poisson and the pure SFP null hypotheses, we have

p_PP = 1 − F_{χ²₁}(Λ_PP)   (15)

and

p_SFP = 1 − F_{χ²₁}(Λ_SFP) .   (16)
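In practice, the two p-values can be computed directly from the maximized log-likelihoods; a sketch (assuming the three fits come from hypothetical helper routines):

    from scipy.stats import chi2

    def classification_pvalues(ll_mix, ll_pure_pp, ll_pure_sfp):
        """p-values (15)-(16) under the chi-square(1) reference."""
        p_pp = chi2.sf(2.0 * (ll_mix - ll_pure_pp), df=1)    # vs pure Poisson
        p_sfp = chi2.sf(2.0 * (ll_mix - ll_pure_sfp), df=1)  # vs pure SFP
        return p_pp, p_sfp

    # Both p-values below 0.05 => classify the series as a mixed process.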

The p-values p_PP and p_SFP of all simulated point processes, pure or mixed, can be seen in the boxplots of Figures 12 and 13, respectively. The red horizontal lines represent the 0.05 threshold. Considering initially the plots in Figure 12, the first block of 10 boxplots corresponds to a pure SFP with different numbers of events. The p-values are practically collapsed to zero and the test rejects the null hypothesis that the process is a pure Poisson process, which is the correct decision. Indeed, this correct decision is taken in virtually all mixed cases: the test still correctly rejects the pure Poisson except when the Poisson share dominates and the number of events is very small. Only when the process is, or is very close to, a pure Poisson process does the p-value distribution clearly shift upward and start accepting the null hypothesis frequently. This is exactly the expected and desired behavior for our test statistic. Figure 13 presents the behavior of p_SFP, which mirrors that of p_PP.

Figure 12: p_PP, the p-values of the test against the pure Poisson process.
Figure 13: p_SFP, the p-values of the test against the pure SFP process.

Figure 14 shows the joint behavior of (p_PP, p_SFP). The red vertical and horizontal lines represent the 0.05 level threshold. Clearly, the two tests practically never accept both null hypotheses, the pure Poisson and the pure SFP processes. Either one or the other pure process is accepted, or else both pure processes are rejected, indicating a mixed process.

Figure 14: p_PP versus p_SFP.

6 Parsimonious Characterization

We used eight datasets split into three groups. The first one contains the comments on topics of several web services: the discussion forums AskMe, MetaFilter, and MetaTalk and the collaborative recommendation systems Digg and Reddit. The second group contains user communication events: e-mail exchange (Enron) and hashtag-based chat (Twitter). The third group is composed of user reviews and recommendations of restaurants in a collaborative platform (Yelp). In total, we analysed 31,431 random series of events.

The AskMe, MetaFilter and MetaTalk datasets were made available by the Metafilter Infodump Project (http://stuff.metafilter.com/infodump/, accessed September 2013). The Digg dataset (http://www.infochimps.com/datasets/diggcom-data-set, accessed September 2013) was temporarily available on the web and was downloaded by the authors. The Enron data (https://www.cs.cmu.edu/~./enron/, accessed September 2013) were obtained through the CALO Project (A Cognitive Assistant that Learns and Organizes) of Carnegie Mellon University. The Yelp data (http://www.yelp.com/dataset_challenge, accessed August 2014) were made available during the Yelp Dataset Challenge. The Reddit and Twitter datasets were collected using their respective APIs. All datasets have time measured in seconds, except for Yelp, which has its time scale measured in days, a more natural scale for this kind of evaluation and review service.

For all databases, each RSE is a sequence of event timestamps, and what constitutes an event varies according to the dataset. For the Enron dataset, each RSE is associated with an individual user and the events are the incoming and outgoing e-mail timestamps. For the Twitter dataset, each RSE is associated with a hashtag and the events are the timestamps of tweets mentioning that hashtag. For the Yelp dataset, each RSE is associated with a venue and the events are the review timestamps. For all other datasets, each RSE is a discussion topic and the events are the comment timestamps. As verified by [39], the rate at which comments arrive has a drastic decay after the topic leaves the forum main page. The average percentage of comments made before this inflection point varies from 85% to 95%, and these represent the bulk of the topic life. As a safe cutoff point, we considered the initial 75% of the flow of comments in each forum topic.

Table 1 shows the number of RSEs in each database. It also shows the average number of events by dataset, as well as the minimum and maximum number of events. We applied our classification test from Section 5, and the table shows the percentage of series categorized as pure Poisson process, pure SFP, or mixed process. For all datasets, the p-values p_PP and p_SFP have a behavior similar to that shown in Figure 14, leading us to believe in the efficacy of our classification test to separate out the models in real databases, in addition to its excellent performance on the synthetic databases.

A more visual and complete way to look at the burstiness scale is through the histograms of Figure 15. The horizontal axis shows the expected percentage of the Poisson process component in the RSE, given by 1 − B̂. The two extreme bars of the horizontal axis, at 0% and 100%, have areas equal to the percentage of series classified as pure SFP and as pure Poisson, respectively. The middle bars represent the RSEs classified as mixed point processes. In AskMe, MetaFilter, MetaTalk, Reddit and Yelp, the three models, the two pure ones and the mixed one, all appear in substantial amounts. The Poisson process share of the mixed processes is distributed over a large range, from close to zero to large percentages, reflecting the wide variety of series behavior.

Figure 15: Histograms of the estimated Poisson share 1 − B̂ in each dataset.
Base        # of     # of events per series        Hypothesis test                                  Bivariate Gaussian fit
            series   Min    Avg        Max         Mix              PP             SFP              log μ̂: mean (sd)   log ρ̂: mean (sd)   corr
AskMe       490      74     99.30      699         333 (67.96%)     43 (8.78%)     114 (23.26%)     -6.91 (0.98)        4.77 (0.39)         -0.40
Digg        974      39     90.41      296         353 (36.24%)     2 (0.21%)      619 (63.55%)     -8.08 (0.83)        4.40 (0.38)         -0.11
Enron       145      55     1,541.35   14258       106 (73.1%)      0 (0%)         39 (26.9%)       -11.56 (0.62)       8.18 (0.57)         -0.28
MetaFilter  8243     72     131.10     4148        5625 (68.24%)    1279 (15.52%)  1339 (16.24%)    -6.76 (0.94)        4.78 (0.37)         -0.39
MetaTalk    2460     73     151.92     2714        1691 (68.74%)    271 (11.02%)   498 (20.24%)     -7.31 (1.08)        5.23 (0.55)         -0.57
Reddit      102      37     535.43     4706        58 (56.86%)      21 (20.59%)    23 (22.55%)      -6.12 (1.1)         3.26 (3.48)         -0.85
Twitter     17088    50     969.68     8564        15913 (93.12%)   72 (0.42%)     1103 (6.46%)     -10.01 (0.31)       6.82 (5.26)         -0.66
Yelp        1929     50     127.84     1646        774 (40.12%)     927 (48.06%)   228 (11.82%)     -3.79 (0.38)        2.28 (0.38)         -0.22
Table 1: Description of the databases: number of series of events; minimum, average, and maximum number of events per series; classification test results; and bivariate Gaussian fit parameters (marginal means and standard deviations of (log μ̂, log ρ̂) and their correlation).

Figure 16 shows the estimated pairs (log μ̂, log ρ̂) for each event stream classified as a mixed process. The logarithmic scale provides the correct scale to fit the asymptotic bivariate Gaussian distribution of the maximum likelihood estimator. Each point represents a RSE and the points are colored according to the database name. Except for the Twitter dataset, all others have their estimator distribution approximately fitted by a bivariate Gaussian distribution with marginal means, standard deviations and correlation given in Table 1.

The correlation is negative in all databases, meaning that a large value of the Poisson process parameter (that is, a large μ̂) tends to be accompanied by small values of the SFP component parameter (that is, a small ρ̂, implying a short median inter-event time between the SFP events). Not only does each database have a negative correlation between the mixture parameters, they also occupy distinct regions along a northwest-southeast gradient. Starting from the upper left corner, we have the Enron e-mail cloud, exhibiting a low average μ̂ jointly with a high ρ̂. Descending the gradient, we find the less compactly shaped Twitter point cloud. Further along we find the forums (AskMe, MetaFilter, MetaTalk). Slightly shifted to the left and further below, we find the two collaborative recommendation systems (Reddit and Digg). Finally, in the lower right corner, we have the Yelp random series estimates.

In this way, our model has been able to spread out the different databases in the space composed of the two component process parameters. Different communication services live in distinctive locations of this mathematical geography.

Figure 16: Estimates (log μ̂, log ρ̂) for all event streams from the eight databases (logarithmic scale). A few anomalous time series are highlighted as large dots.

7 Goodness of Fit

Figure 17 shows goodness of fit statistics for the RSEs classified as mixed processes. After obtaining the μ̂ and ρ̂ estimates, we disentangled the two processes using the Monte Carlo simulation procedure described in Section 4.2. The separated-out events were then used to calculate the statistics shown in the two histograms. The plot in Figure 17a shows the determination coefficient R² from the linear regression of the cumulative number of Poisson-labelled events N(t) versus t, which should be approximately a straight line under the Poisson process hypothesis.

In Figure 17b we show the R² from a linear regression with the SFP-labelled events. We take the sample of inter-event times and build its empirical cumulative distribution function F̂, leading to the odds-ratio function F̂ / (1 − F̂). This function should be approximately a straight line if the SFP process hypothesis is valid (more details in [37]).

Indeed, the two histograms of Figure 17 show a very high concentration of the R² statistics close to the maximum value of 1 for the collection of RSEs. This provides evidence that our procedure for disentangling the mixed process into two components is able to create two processes that fit the characteristics of a Poisson process and an SFP process.
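A sketch of the two R² statistics (our illustration), using scipy's linregress:

    import numpy as np
    from scipy.stats import linregress

    def r2_poisson(pp_times):
        """R^2 of N(t) versus t for the Poisson-labelled events."""
        counts = np.arange(1, len(pp_times) + 1)
        return linregress(pp_times, counts).rvalue ** 2

    def r2_sfp(sfp_times):
        """R^2 of the empirical odds-ratio of SFP inter-event times."""
        gaps = np.sort(np.diff(sfp_times))
        ecdf = np.arange(1, len(gaps) + 1) / (len(gaps) + 1.0)  # avoids F = 1
        return linregress(gaps, ecdf / (1.0 - ecdf)).rvalue ** 2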

(a) PP
(b) SFP
Figure 17: Goodness of fit of mixed series

8 Comparison with Hawkes process

An alternative to our model is the Hawkes point process [7], whose conditional intensity is defined by

λ(t|H_t) = μ₀ + Σ_{t_i < t} φ(t − t_i) ,   (17)

where φ is called the kernel function. Like our BuSca model, the Hawkes process allows successive events to interact with each other. However, there are two important differences between them. In Hawkes, every single event excites the process, increasing the chance of additional events immediately after, while in BuSca only some of the incoming events induce process excitement: depending on the value of B, only a fraction of the events lead to an increase in the conditional intensity. The second difference is the need to specify a functional form for the kernel φ, common choices being an exponential decay or a power-law decay.

We compared our model with the alternative Hawkes process using the 31,431 event time series of all the databases we analysed. We fitted both BuSca and the Hawkes process to each time series separately by maximum likelihood, and evaluated the resulting Akaike information criterion (AIC). The Hawkes process was fitted with the exponential kernel, implying a three-parameter model, while BuSca requires only two parameters. The result: only four out of the 31,431 RSEs had a smaller Hawkes AIC. Therefore, for practically all the time series we considered, our model fits the data better while requiring fewer parameters.
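The comparison itself is simple AIC bookkeeping once each model's maximized log-likelihood is available (a sketch; ll_busca and ll_hawkes are assumed to come from the respective fitting routines):

    def aic(loglik, n_params):
        """Akaike information criterion: lower is better."""
        return 2.0 * n_params - 2.0 * loglik

    def busca_wins(ll_busca, ll_hawkes):
        # BuSca has 2 parameters; the exponential-kernel Hawkes has 3.
        return aic(ll_busca, 2) < aic(ll_hawkes, 3)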

To better understand its relative failure, we looked at the fit of the Hawkes model for each time series (see [28] for the likelihood calculation in the Hawkes model), studying its distribution conditioned on the estimated burstiness scale B̂. Hawkes is able to fit reasonably well only when B̂ is small, that is, when the series of events is Poisson dominated. It has mixed results for intermediate values of B̂, and a poor fit when B̂ is large, exactly when the bursty periods are more prevalent.

9 Applications

In this section, we describe two applications based on our proposed model: the detection of series of events that should be considered anomalous, given the typical statistical behavior of the population of series in a database; and the detection of burst periods, when the series has a cascade of events that is due mainly to the SFP component.

9.1 Anomaly detection

Within a given database, we saw empirically that the maximum likelihood estimators (log μ̂, log ρ̂) of the series of events follow approximately a bivariate Gaussian distribution. This can be justified by a Bayesian-type argument. Within a database, assume that the true parameters (in the log scale) follow a bivariate Gaussian distribution. That is, each particular series has its own specific parameter value (log μ, log ρ). Conditional on this parameter vector, we know that the maximum likelihood estimator has an asymptotic distribution that is also a bivariate Gaussian. Therefore, unconditionally, the estimate vectors for the set of series of each database should exhibit a Gaussian behavior, as indeed we see in Figure 16.

Let x_{ij} = (log μ̂_{ij}, log ρ̂_{ij}) be the estimate for the j-th individual series of the i-th database. We assume that the x_{ij} from different individuals within a given database are i.i.d. bivariate random vectors following the bivariate Gaussian distribution N₂(m_i, S_i), where m_i is the vector of expected values in database i and S_i is its covariance matrix. To find the anomalous time series of events in database i, we used the Mahalanobis distance between each j-th individual and the typical value, given by:

D²_{ij} = (x_{ij} − m_i)ᵀ S_i⁻¹ (x_{ij} − m_i) .   (18)

A standard probability calculation establishes that D²_{ij} has a chi-square distribution with two degrees of freedom when x_{ij} is indeed selected from the bivariate Gaussian N₂(m_i, S_i). This provides a direct score for an anomalous point time series. If its estimated vector has Mahalanobis distance D²_{ij} > k_ε, the time series is considered anomalous. The threshold k_ε is the (1 − ε)-percentile of the chi-square distribution with two degrees of freedom, defined by P(D²_{ij} > k_ε) = ε. We adopted a small value for ε.

One additional issue in using (18) is the unknown values of the expected vector m_i and the covariance matrix S_i. We used robust estimates for these unknown parameters. Since we anticipate anomalous points among the sample, and we do not want them unduly affecting the estimates, we used estimation procedures that are robust to the presence of outliers. Specifically, we used the empirical medians in each database to estimate m_i and the median absolute deviation to estimate each marginal standard deviation. For the correlation parameter, we substituted each marginal mean and standard deviation in the usual estimator by its robust counterpart, yielding the so-called correlation median estimator (see [31]).
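A sketch of the whole anomaly scoring follows, with medians and MADs for location and scale. As a stand-in for the correlation median estimator of [31] we use a Gnanadesikan-Kettenring-style robust correlation, and the threshold level eps is our assumption:

    import numpy as np
    from scipy.stats import chi2, median_abs_deviation

    def robust_corr(zx, zy):
        """Robust correlation from robust scales of the sum and difference
        (a stand-in for the correlation median estimator of [31])."""
        u = median_abs_deviation(zx + zy, scale='normal')
        v = median_abs_deviation(zx - zy, scale='normal')
        return (u**2 - v**2) / (u**2 + v**2)

    def anomaly_flags(X, eps=1e-3):
        """Flag rows of X = (log mu_hat, log rho_hat) whose Mahalanobis
        distance (18) exceeds the chi-square(2) (1 - eps)-quantile."""
        m = np.median(X, axis=0)                              # robust center
        s = median_abs_deviation(X, axis=0, scale='normal')   # robust scales
        Z = (X - m) / s
        r = robust_corr(Z[:, 0], Z[:, 1])
        S_inv = np.linalg.inv(np.array([[1.0, r], [r, 1.0]]))
        d2 = np.einsum('ij,jk,ik->i', Z, S_inv, Z)            # eq. (18)
        return d2 > chi2.ppf(1.0 - eps, df=2)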

In Figure 16 we highlighted 5 anomalous points found by our procedure to illustrate its usefulness. The first one corresponds to the topic #219940 of the AskMe dataset, which has a very low value of μ̂ compared to the other topics in this forum. This topic was initiated by a post about a lost dog, with the owner asking for help. Figure 18 shows the cumulative number of events up to time t, measured in days. Consistent with the standard behavior in this forum, there is an initial burst of events with users suggesting ways to locate the pet or sympathizing with the pet owner. This is followed by a Poissonian period of events arising at a constant rate. Occasional bursts of lower intensity are still present, but eventually the topic reaches a very low rate. Typically, at this point, the topic would be considered dead and we would not see any additional activity; further discussion would likely start a new topic. However, this was not what happened here. At the time marked by the vertical red line, the long inactivity period is broken by a post from the pet owner mentioning that he received new and promising information about the dog's whereabouts. Once again, he receives a cascade of suggestions and supporting messages. Before this flow of events decreases substantially, he posts that the dog has finally been found. This moment is marked by the blue vertical line, and it caused a new cascade of events congratulating the owner on the good news. This topic is anomalous with respect to the others in the AskMe database because, in fact, it contains three successive typical topics considered as a single one. The long inactivity period, in which the topic was practically dead, led to a very low μ̂, reflecting mathematically the anomaly in the content we just described.

Figure 18: Representation of topic #219940, considered an anomaly in the AskMe dataset.

In the MetaTalk database, the time series #18067 and #21900 deal with an unusual kind of topic for this platform. They are reminders of the deadline for posting in a semestral event among the users, which prompted them to justify their lateness or to make a comment about the event. This event is called MeFiSwap, a way found by the users to share their favorite playlists. The first one occurred in the summer of 2009 and the second one in the winter of 2012. Being reminders, they do not add content, but refer to and promote other forum posts. What is anomalous in these two time series is the time they took to develop: 22.6 days (#18067) and 10.9 days (#21900), while the average topic takes about 1.9 days. The pattern within their enlarged time scale is the same as the rest of the database. The behavior of these two cases is closer to the Twitter population, as can be seen in Figure 16.

The Twitter time series #1088 was pinpointed as an anomaly due to its relatively large value of the Poisson component μ̂. It offered free tickets for a certain cultural event. To qualify for the tickets, users should post something using the hashtag #iwantisatickets. This triggered a cascade of associated posts that kept an approximately constant rate while it lasted.

Our final anomaly example is the time series #65232 from MetaFilter. It was considered inappropriate and was deleted from the forum by a moderator. The topic author suggested that grocery shopping should be exclusively a women's chore because his wife had discovered many deals he was unable to find. The subject was considered irrelevant and quickly prompted many criticisms among the users.

9.2 Burst detection and identification

Another practical application developed in this paper is the detection and identification of burst periods in each individual time series. This allows us to: (i) infer whether a given topic is experiencing a quiet or a burst period; (ii) identify potential subtopics associated with distinct bursts; (iii) help understand the causes of bursts. The main idea is that a period with essentially no SFP activity should have the cumulative number of events increasing at a constant and minimum slope approximately equal to μ̂. Periods with SFP activity quickly increase this slope to some larger value. We explore this intuitive idea by optimally segmenting the series.

We explain our method using Figure 19a, which shows the history of the Twitter hashtag #Yankees! (the red line) spanning the regular and postseason periods in 2009. We repeat several times the decomposition of the series of events into the two pure processes, Poisson and SFP, as explained in Section 4.2. We select the best fitting decomposition by considering a max-min statistic: the maximum, over replications, of the minimum of the two fits, the pure Poisson (blue line) and the pure SFP (green line). During the regular season, with posts coming essentially from the more enthusiastic fans, the behavior is completely dominated by a homogeneous Poisson process.

Considering only the pure SFP events, we run the Segmented Least Squares algorithm from [20]. For each potential segment, we fit a linear regression and obtain its minimum sum of squares. A score measure for a segmentation is the sum of these minimum sums of squares over its segments. The best segmentation minimizes the score measure. Figure 19b shows the optimal segmentation of the #Yankees! SFP series.

This algorithm has quadratic complexity in the number of points and therefore is not efficient for large time series. Hence, we reduced the number of SFP events by breaking them into 200 blocks or less. To avoid these blocks being concentrated in burst periods, we mixed two split strategies. We selected 100 split points by taking successive percentiles (that is, the k-th split point is the event that leaves k% of the events below it). We also divided the total time interval into 100 equal-length segments and took the closest event to each division point. These 200 points constitute the candidate segment endpoints.

The decision of which segments can be called bursts depends on the specific application. We say that each time segment found by the algorithm has a power P, defined as the ratio between the observed number of SFP events and the expected number of Poisson events in the same segment. Therefore, the total number of points in a segment is approximately equal to (1 + P) times the expected number of Poisson events. The larger the value of P, the more intense the burst in that segment. For illustrative purposes, we take P ≥ 1 as large enough to determine that the segment contains an SFP cascade. In these cases, the segment has at least twice as many events as expected solely from the Poisson process.
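A compact version of the segmentation and the burst power (our sketch of the classic dynamic program in [20]; the per-segment penalty is the standard device to avoid over-segmentation and its value is our assumption):

    import numpy as np

    def segment_fit_error(x, y, j, i):
        """Residual sum of squares of a line fitted to points j..i."""
        A = np.vstack([x[j:i + 1], np.ones(i - j + 1)]).T
        _, res, *_ = np.linalg.lstsq(A, y[j:i + 1], rcond=None)
        return res[0] if res.size else 0.0

    def segmented_least_squares(x, y, penalty=1.0):
        """Optimal contiguous segmentation minimizing RSS + penalty/segment."""
        n = len(x)
        opt = np.zeros(n + 1)            # opt[i]: best score for first i points
        cut = np.zeros(n + 1, dtype=int)
        for i in range(1, n + 1):
            scores = [opt[j] + segment_fit_error(x, y, j, i - 1) + penalty
                      for j in range(i)]
            cut[i] = int(np.argmin(scores))
            opt[i] = scores[cut[i]]
        bounds, i = [], n                # backtrack the optimal cuts
        while i > 0:
            bounds.append((cut[i], i - 1))
            i = cut[i]
        return bounds[::-1]

    def burst_power(sfp_seg_times, mu_hat):
        """Power P: observed SFP events over expected Poisson events."""
        expected = mu_hat * (sfp_seg_times[-1] - sfp_seg_times[0])
        return len(sfp_seg_times) / expected if expected > 0 else np.inf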

Figure 19c returns to the original time series, superimposing the segment divisions and adding the main New York Yankees games during the playoffs (the American League Division Series (ALDS), the League Championship Series (LCS), and the World Series (WS)). The first segment found by the algorithm starts during the week of October 7, at the first postseason games against the Minnesota Twins (red crosses in Figure 19c). In this first playoff segment, we have P ≈ 4, or four times the standard regular behavior. The LCS games sustain an augmented burst until November 4, when the New York Yankees defeated the Philadelphia Phillies in the final game. This last game generated a very short burst, marked by the blue diamond, with an even larger P. After this explosive period, the series resumes its usual standard behavior.

(a) Cumulative series N(t) versus t (in percentage of their totals).
(b) The SFP points used to carry out the optimal segmentation.
(c) N(t) with the optimal segments and the major games during the playoffs.
Figure 19: Steps in applying the burst detection algorithm to the hashtag #Yankees! in Twitter.

Analysing the statistical distribution of the burst power P for each database separately, we found that it is well fitted by a heavy-tailed probability distribution, a finding that is consistent with previous studies of cascade events ([2]).

10 Discussion and conclusions

In this paper, we proposed the Burstiness Scale (BuSca) model, which views each random series of events (RSE) as a mix of two independent processes: a Poissonian one and a self-exciting one. We presented and validated a particular and highly parsimonious case of BuSca, where the Poissonian process is given by a homogeneous Poisson process (PP) and the self-exciting process is given by a Self-Feeding Process (SFP) [36]. When constructed in this way, BuSca is highly parsimonious, requiring only two parameters to characterize RSEs, one for the PP and another for the SFP. We validated our approach by analyzing eight diverse and large datasets containing real random series of events seen in Twitter, Yelp, e-mail conversations, Digg, and online forums. We also proposed a method that uses the BuSca model to disentangle events related to routine and constant behavior (Poissonian) from bursty and trendy ones (self-exciting). Moreover, from the two parameters of BuSca, we can calculate the burstiness scale parameter B, which represents how much of the RSE is due to bursty and viral effects. We showed that these two parameters, together with our proposed burstiness scale, constitute a highly parsimonious way to accurately characterize random series of events, which, consequently, may leverage several applications, such as monitoring systems, anomaly detection methods, flow predictors, among others.

References

  • [1] Lars Backstrom, Jon Kleinberg, Lillian Lee, and Cristian Danescu-Niculescu-Mizil. Characterizing and curating conversation threads. In Proceedings of the sixth ACM international conference on Web search and data mining - WSDM ’13, page 13, New York, New York, USA, 2013. ACM Press.
  • [2] Albert-László Barabási. The origin of bursts and heavy tails in human dynamics. Nature, 435(7039):207–211, may 2005.
  • [3] Christian Bauckhage, Fabian Hadiji, and Kristian Kersting. How Viral Are Viral Videos?, 2015.
  • [4] Tom Broxton, Yannet Interian, Jon Vaver, and Mirjam Wattenhofer. Catching a viral video. Journal of Intelligent Information Systems, 40(2):241–259, apr 2013.
  • [5] Jin Cao, William S. Cleveland, Dong Lin, and Don X. Sun. Internet Traffic Tends Toward Poisson and Independent as the Load Increases. In Nonlinear Estimation and Classification, pages 83–109. Springer, 2003.
  • [6] Daejin Choi, Jinyoung Han, Taejoong Chung, Yong-Yeol Ahn, Byung-Gon Chun, and Ted Taekyoung Kwon. Characterizing Conversation Patterns in Reddit. In Proceedings of the 2015 ACM on Conference on Online Social Networks - COSN ’15, pages 233–243, New York, New York, USA, 2015. ACM Press.
  • [7] R. Crane and D. Sornette. Robust dynamic classes revealed by measuring the response function of a social system. Proceedings of the National Academy of Sciences, 105(41):15649–15653, oct 2008.
  • [8] Nan Du, Mehrdad Farajtabar, Amr Ahmed, Alexander J. Smola, and Le Song. Dirichlet-Hawkes Processes with Applications to Clustering Continuous-Time Document Streams. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD ’15, pages 219–228, New York, New York, USA, 2015. ACM Press.
  • [9] Jean-Pierre Eckmann, Elisha Moses, and Danilo Sergi. Entropy of dialogues creates coherent structures in e-mail traffic. Proceedings of the National Academy of Sciences of the United States of America, 101(40):14333–14337, 2004.
  • [10] Alceu Ferraz Costa, Yuto Yamaguchi, Agma Juci Machado Traina, Caetano Traina, and Christos Faloutsos. RSC: Mining and Modeling Temporal Activity in Social Media. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD ’15, pages 269–278, New York, New York, USA, 2015. ACM Press.
  • [11] Flavio Figueiredo, Jussara M. Almeida, Yasuko Matsubara, Bruno Ribeiro, and Christos Faloutsos. Revisit Behavior in Social Media: The Phoenix-R Model and Discoveries. In Machine Learning and Knowledge Discovery in Databases (ECML PKDD), pages 386–401. Springer, 2014.
  • [12] Vladimir Filimonov, Spencer Wheatley, and Didier Sornette. Effective measure of endogeneity for the Autoregressive Conditional Duration point processes via mapping to the self-excited Hawkes process. Communications in Nonlinear Science and Numerical Simulation, 22(1-3):23–37, may 2015.
  • [13] Shuai Gao, Jun Ma, and Zhumin Chen. Modeling and Predicting Retweeting Dynamics on Microblogging Platforms. In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining - WSDM ’15, pages 107–116, New York, New York, USA, feb 2015. ACM Press.
  • [14] Scott Garriss, Michael Kaminsky, Michael J Freedman, Brad Karp, David Mazières, and Haifeng Yu. Re: Reliable Email. In Proceedings of the Third USENIX/ACM Symposium on Networked System Design and Implementation (NSDI’06), pages 297–310, 2006.
  • [15] Vicenç Gómez, Hilbert J. Kappen, Nelly Litvak, and Andreas Kaltenbrunner. A likelihood-based framework for the analysis of discussion threads. World Wide Web, 16(5-6):645–675, nov 2013.
  • [16] Frank A. Haight. Handbook of the Poisson Distribution. Wiley, New York, 1967.
  • [17] Hao Jiang and Constantinos Dovrolis. Why is the Internet Traffic Bursty in Short Time Scales? In Proceedings of the 2005 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems (SIGMETRICS’05), pages 241–252, 2005.
  • [18] T. Karagiannis, M. Molle, M. Faloutsos, and A. Broido. A nonstationary Poisson view of Internet traffic. In IEEE INFOCOM 2004, volume 3, pages 1558–1569. IEEE, 2004.
  • [19] Jon Kleinberg. Bursty and hierarchical structure in streams. In Proceedings of the eighth ACM SIGKDD, KDD ’02, pages 91–101, New York, NY, USA, 2002. ACM.
  • [20] Jon Kleinberg and Eva Tardos. Algorithm design. Pearson Education, 2006.
  • [21] Janette Lehmann, Bruno Gonçalves, José J. Ramasco, and Ciro Cattuto. Dynamical classes of collective attention in twitter. In Proceedings of the 21st international conference on World Wide Web - WWW ’12, page 251, New York, New York, USA, 2012. ACM Press.
  • [22] Kristina Lerman and Tad Hogg. Using a model of social dynamics to predict popularity of news. In Proceedings of the 19th international conference on World wide web - WWW ’10, page 621, New York, New York, USA, 2010. ACM Press.
  • [23] R. D. Malmgren, D. B. Stouffer, A. E. Motter, and L. A. N. Amaral. A Poissonian explanation for heavy tails in e-mail communication. Proceedings of the National Academy of Sciences, 105(47):18153–18158, nov 2008.
  • [24] R. Dean Malmgren, Jake M. Hofman, Luis A.N. Amaral, and Duncan J. Watts. Characterizing individual communication patterns. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining - KDD ’09, page 607, New York, New York, USA, 2009. ACM Press.
  • [25] Naoki Masuda, Taro Takaguchi, Nobuo Sato, and Kazuo Yano. Self-Exciting Point Process Modeling of Conversation Event Sequences. In Understanding Complex Systems, chapter Temporal N, pages 245–264. 2013.
  • [26] Yasuko Matsubara, Yasushi Sakurai, B. Aditya Prakash, Lei Li, and Christos Faloutsos. Rise and fall patterns of information diffusion. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining - KDD ’12, page 6, New York, New York, USA, 2012. ACM Press.
  • [27] João G. Oliveira and Albert-László Barabási. Human dynamics: Darwin and Einstein correspondence patterns. Nature, 437(7063):1251, 2005.
  • [28] Roger Peng. Multi-dimensional point process models in R. Journal of Statistical Software, 8(1):1–27, 2003.
  • [29] Julio Cesar Louzada Pinto, Tijani Chahed, and Eitan Altman. Trend detection in social networks using Hawkes processes. In Proceedings of the 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2015 - ASONAM ’15, pages 1441–1448, New York, New York, USA, 2015. ACM Press.
  • [30] Daniel M. Romero, Brendan Meeder, and Jon Kleinberg. Differences in the mechanics of information diffusion across topics. In Proceedings of the 20th international conference on World wide web - WWW ’11, page 695, New York, New York, USA, 2011. ACM Press.
  • [31] G. Shevlyakov and P. Smirnov. Robust Estimation of the Correlation Coefficient: An Attempt of Survey. Austrian Journal of Statistics, 40(1):147–156, 2011.
  • [32] Stefan Siersdorfer, Sergiu Chelaru, Jose San Pedro, Ismail Sengor Altingovde, and Wolfgang Nejdl. Analyzing and Mining Comments and Comment Ratings on the Social Web. ACM Transactions on the Web, 8(3):1–39, jul 2014.
  • [33] Donald L. Snyder and Michael I. Miller. Random Point Processes in Time and Space. Springer Texts in Electrical Engineering. Springer New York, New York, NY, 1991.
  • [34] Nemanja Spasojevic, Zhisheng Li, Adithya Rao, and Prantik Bhattacharyya. When-To-Post on Social Networks. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD ’15, pages 2127–2136, New York, New York, USA, 2015. ACM Press.
  • [35] David Vallet, Shlomo Berkovsky, Sebastien Ardon, Anirban Mahanti, and Mohamed Ali Kafaar. Characterizing and Predicting Viral-and-Popular Video Content. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, CIKM ’15, pages 1591–1600, New York, NY, USA, 2015. ACM.
  • [36] Pedro O S Vaz de Melo, Christos Faloutsos, Renato Assuncao, and Antonio A F Loureiro. The Self-Feeding Process: A Unifying Model for Communication Dynamics in the Web. In WWW ’13: 22nd International World Wide Web Conference, 2013.
  • [37] Pedro Olmo Stancioli Vaz de Melo, Christos Faloutsos, Renato Assunção, Rodrigo Alves, and Antonio A. F. Loureiro. Universal and Distinct Properties of Communication Dynamics: How to Generate Realistic Inter-event Times. ACM Transactions on Knowledge Discovery from Data, 2015.
  • [38] Alexei Vazquez, Joao Gama Oliveira, Zoltan Dezso, Kwang-Il Goh, Imre Kondor, and Albert-Laszlo Barabasi. Modeling bursts and heavy tails in human dynamics. Physical Review E, 73:036127, 2006.
  • [39] Chunyan Wang, Mao Ye, and Bernardo A. Huberman. From user comments to on-line conversations. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining - KDD ’12, page 244, New York, New York, USA, 2012. ACM Press.
  • [40] Senzhang Wang, Zhao Yan, Xia Hu, Philip S Yu, and Zhoujun Li. Burst Time Prediction in Cascades, 2015.
  • [41] Jaewon Yang and Jure Leskovec. Patterns of temporal variation in online media. In Proceedings of the fourth ACM international conference on Web search and data mining - WSDM ’11, page 177, New York, New York, USA, 2011. ACM Press.
  • [42] Shuang-hong Yang and Hongyuan Zha. Mixture of Mutually Exciting Processes for Viral Diffusion. In Sanjoy Dasgupta and David Mcallester, editors, Proceedings of the 30th International Conference on Machine Learning (ICML-13), volume 28, pages 1–9. JMLR Workshop and Conference Proceedings, 2013.
  • [43] Honglin Yu, Lexing Xie, and Scott Sanner. The Lifecycle of a Youtube Video: Phases, Content and Popularity, 2015.
  • [44] Qingyuan Zhao, Murat A. Erdogdu, Hera Y. He, Anand Rajaraman, and Jure Leskovec. SEISMIC: A Self-Exciting Point Process Model for Predicting Tweet Popularity. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD ’15, pages 1513–1522, New York, New York, USA, 2015. ACM Press.