Key areas of network science have successfully adopted and refined statistical modeling ideas and methods from other fields, and stimulated new statistical developments of broader use. The general area of dynamic networks has generated broad interest in questions of modeling changes in time of network (and graph) structures from statistics and machine learning perspectives; we comment on a number of recent contributions (including Hanneke et al., 2010; Richard et al., 2014; Newman, 2004, 2018; Holme & Saramäki, 2013; Holme, 2015; Giraitis et al., 2016; Bianchi et al., 2018), and relevant Bayesian approaches (including Kim et al., 2018; Sarkar et al., 2007; Xing et al., 2010; Xu & Hero, 2014) in Section 2.1.
Our interest is in studies of time-varying patterns of integer counts of traffic flowing between nodes in a given network (as well as into and out of the network). This topic has broad application but has not previously been well-addressed in statistical terms. Past work on so-called network tomography (e.g. Tebaldi & West, 1998) and physical traffic flow forecasting (e.g. Tebaldi et al., 2002; Queen & Albers, 2009; Anacleto et al., 2013a, b) are relevant, but our dynamic network flows contexts present challenges that require new modeling approaches. We exploit perspectives of Bayesian time series modeling and analysis to advance practicable methodology for characterizing and monitoring dynamic network flows. The motivating context is in internet traffic in e-commerce web-sites, where we present aspects of a case study based on the new modeling and analysis framework.
Increasing access to streaming traffic data on networks drives interest in models to quantify inherent levels of variation in flows of traffic between network nodes and into/out of the network from other sources. There are two aspects of this. The first is natural and unpredictable variation, the second is that of stochastic but sustained patterns in underlying trends over time. State-space models directly address this: they couple observation (noise) models for unpredictable variation with latent state processes representing the structure and relationships among nodes—and patterns of time-variation in these relationships. Understanding and appropriately characterizing normal patterns of variation in traffic flows is necessary to coherently address questions of monitoring flows for potential anomalies, and then to intervene or otherwise respond to interesting inferred changes.
Key technical challenges are to develop real-time/sequential analysis that is computationally efficient and scalable with network size and data sampling rates. We address these questions from a new perspective of Bayesian time series analysis, introducing to the network science literature approaches based on dynamic generalized linear models (DGLMs) of proven utility in other fields for many years (e.g. Migon & Harrison, 1985; West et al., 1985; West & Harrison, 1997, Chapter 14). The class of DGLMs integrates non-Gaussian sampling models of traditional generalized linear models with state-space evolution models for time series. The subclass of models based on conditional Poisson sampling distributions is especially relevant to dynamic network flow studies, as it is to other problems involving monitoring and forecasting multivariate systems of time series of counts (e.g. Berry & West, 2019; Berry et al., 2018).
We extend basic univariate DGLMs to the multivariate dynamic system generated by large-scale networks. We do this using the modeling concept of decouple/recouple originally introduced for multivariate time series in financial and economic applications (Gruber & West, 2016, 2017), and recently extended to network data with simple network flow models (Chen et al., 2018). The latter reference represents our starting point here. The decouple/recouple idea is combined with the use of Bayesian model emulation, which maps inferences from collated sets of individual node-node flow models to an integrated multivariate network model. This enables us to explore dynamics in network-level activity intensity, node-specific flow dynamics, and node-node interactions over time with information from individual flow levels.
This new model class is quite general in admitting a range of possible state-space models for link-specific flow evolutions. In our case study, we utilize one of the most important special cases of (local) linear growth models for adapting to unpredictable changes in trends underlying flow patterns. Other applications will involve special cases customized to context, such as dynamic regressions on known predictors, seasonal structures, and other Bayesian state-space structures used in time series analysis (e.g. West & Harrison, 1997; Prado & West, 2010).
Our case study concerns internet traffic in e-commerce, where the flow data are counts of visitors moving between nodes that are clusters of web pages corresponding to meaningful categories in a commercial web site. The network example has over 56,000 node pairs and data for 288 time points within one day. Online advertisers are interested in many statistical issues related to traffic flow and site-segment content. The field has become quite sophisticated, and many methods have been employed to explore related problems, for instance, complex recommender systems (Koren et al., 2009) and text mining (Soriano et al., 2013), among others. However, basic questions of statistical modeling to characterize, monitor and potentially understand dynamics in traffic across site-segments have not received as much attention as they warrant. In particular, there is potential commercial value as well as inherently interesting methodological concern in identifying and adapting to fluctuations in the changes of a site's popularity on short time scales. There is related interest in understanding the interactions between sites with respect to traffic, and in model-based monitoring for subtle anomaly detection in sub-networks. In such applications, one important focus is "ads pacing", which relates to the speed with which advertisers deplete the budget assigned to a specific ad. A good understanding of traffic dynamics can help control the delivery of ads and budget spending, and thus improve the matching between demand-side and supply-side to maximize revenue.
We extend the recent work of Chen et al. (2018) in terms of network context and goals. We introduce the rich class of DGLMs for latent Poisson rates, with opportunities to substantially advance methodology for time-varying flow characteristics on larger networks. In addition to extending the statistical modeling methodology, analysis of historical patterns in flow rates is enabled with extensions of existing Bayesian time series analysis with DGLMs to include retrospective posterior sampling of state vectors over time. This underlies the ability to map inferences on the time-varying parameters of so-called dynamic gravity models to inference on global network flow rates, dynamics in node-specific rates, and dynamics in node-node relationships. The application in this paper involves a large network that highlights the utility of the new models, along with the technical advances in modeling and computation for larger problems.
Section 3 describes the network context, basic statistical setting and notation. It outlines the concept of decoupling multivariate flows into those on univariate node-node pairs, as well as dynamic parameter mappings for inference on node-specific and node-node interactions. Section 4 details the class of conditionally Poisson DGLMs in a general setting, and then focuses on the specific example of local linear growth models for latent flow rates underlying network counts. Linked to these models, the Appendix has three components. Appendix A summarizes technical details for mapping of DGLM inferences to the context of dynamic gravity models. Appendix B summarizes the standard on-line learning algorithm for DGLMs and a novel methodological extension required for approximate posterior simulation of full time trajectories of latent state vectors. Appendix C provides further examples of inferred dynamics in node-node interaction effects, illustrating potential insights for online advertising.
Our case study involves data from the Fox News web site, where flows represent individuals browsing web pages. Section 5 introduces the data and context, develops initial DGLM analysis and discusses some comparisons with the prior approach in Chen et al. (2018). Section 6 discusses the mapping of posterior distributions from the DGLM analysis results to the time-varying parameters of highly structured dynamic gravity models (DGMs), the latter then defining inferences on node-specific and node-node interaction effects and how they vary over time. Several highlights are discussed using specific nodes from the news web site. We give some concluding comments in Section 7.
2.1 Related Work
Dynamic network analysis is an increasingly active research area and includes a multitude of network contexts and analysis goals. In statistics, for example, Hanneke et al. (2010) proposed a family of statistical models extending traditional exponential random graph models (ERGMs) to a dynamic setting, with interest in potential problems of testing for changes as well as classification. Statistical and machine learning approaches have evolved to address the problem of link prediction in time-evolving graphs (e.g. Richard et al., 2014). Perspectives from different areas of the physical sciences have included approaches based on graph theory (e.g. Newman, 2004, 2018; Holme & Saramäki, 2013; Holme, 2015), while dynamic network models are increasingly visible in applied studies in areas including biology (e.g. Uetz et al., 2000; Giot et al., 2003), econometrics and marketing (e.g. Giraitis et al., 2016; Bianchi et al., 2018).
Various Bayesian statistical models have also been developed for dynamic network studies with state-space models (Kim et al., 2018). State-space models represent data with coupled observation components (noise, natural random variation) and latent process components (underlying state variables with stochastic but sustained patterns over time). Conditionally linear, Gaussian models are amenable to analytic computation using Kalman filtering-style analysis, and more extensive analysis is possible using broader classes of Bayesian state-space models (e.g. West & Harrison, 1997). However, many if not most problems of network inference in dynamic contexts involve non-Gaussian data structures and non-linear effects, so stronger computational methods such as Markov chain Monte Carlo are needed (Xu & Hero, 2014; Hoff, 2011). For example, Sarkar et al. (2007) map dynamic co-occurrence data to dynamic embeddings in low-dimensional Euclidean space, while Xing et al. (2010) and Xu & Hero (2014) extend well-known stochastic block models to study dynamic network tomography. More closely allied to some of our main interests here are past works on the so-called network tomography problem (e.g. Tebaldi & West, 1998) and state-space models for physical traffic flow forecasting (e.g. Tebaldi et al., 2002; Queen & Albers, 2009; Anacleto et al., 2013a, b). Our work presents a broader approach to such studies by exploiting rich classes of Bayesian state-space models to characterize and analyse complex, time-varying flows on networks.
This paper extends the recent work of Chen et al. (2018) in terms of network context and goals. Using an example of a small network (20 nodes), that work utilized a relatively simple smoothing model for latent time-varying Poisson rates on node-node pairs, and introduced the key idea of decouple/recouple to enable scaling. While based on a state-space approach, the simple smoothing model is not able to adequately capture trends or abrupt changes in network flows, and the model structure is not flexible enough to incorporate other node/flow features. Those challenges are addressed by DGLMs, whose utility has been proven in other fields for many years (e.g. Migon & Harrison, 1985; West et al., 1985; West & Harrison, 1997, Chapter 14).
2.2 Network Structure and Notation
Consider a closed network defined on nodes i = 1:I, with counts of traffic flowing between pairs of nodes observed sequentially over equally-spaced times t = 1, 2, .... This defines a highly interdependent, multivariate count time series. The counts represent units who individually enter the network at one of the nodes at some specific time, transit to other nodes, may stay at a node for a period of time, and may exit the network at some time point. The case study involves IP addresses (indicating individuals) browsing a web site comprised of disjoint sets of web categories (e.g., Arts & Entertainment, Weather), so that these sets are the nodes. In this and other applications, an additional node indexed by 0 is needed to represent flows into, or out from, nodes in the core network. At each time point t, let n_{t,i} be the number of occupants of node i, and let x_{t,ij} be the flow count from node i to node j, including the in-flows x_{t,0i} and out-flows x_{t,i0}, as shown in Fig. 1.
3 Statistical Structure
3.1 Dynamic Poisson and Multinomial Flow Distributions
The natural class of dynamic models has hidden Markovian structure, with latent rates defining transitions between nodes over time, the rates themselves being time-varying. We adopt conditionally independent Poisson models for in-flows to the network, coupled with conditionally independent multinomials for flows from each node, all with time-varying parameters. That is, for t = 1, 2, ... and network nodes i = 1:I,

    x_{t,0i} ~ Po(lambda_{t,i})   and   x_{t,i.} ~ Mn(n_{t,i}, theta_{t,i}),    (1)

where x_{t,i.} = (x_{t,i0}, x_{t,i1}, ..., x_{t,iI})' is the vector of outflows to all nodes from node i at time t. In the notation here, Po(lambda) denotes a Poisson distribution with mean lambda, and Mn(n, theta) denotes a multinomial distribution with n trials and cell probabilities in the vector theta. The defining parameters of these distributions are time t values of underlying latent processes: lambda_{t,i} is the time-varying rate process governing flows into node i from the external node 0; theta_{t,i} = (theta_{t,i0}, theta_{t,i1}, ..., theta_{t,iI})' is the vector of time-varying transition probabilities from node i to all other nodes, with theta_{t,i0} indicating departure from the network.
Modeling flexibility and computational efficiency are key needs in large-scale dynamic network analysis. In our case study, one goal of the end-user is to optimize online advertisement placement by analyzing browsing patterns, and these decisions must be made in less than a second, demanding fast and effective analysis. The theory underlying multinomial models allows us to address this using decoupling of flow transitions to node-node pairs, enabling use of univariate DGLMs in parallel, followed by theoretical recoupling for exact inference on the sets of multinomial probabilities. In the decoupling step, network flows originating from nodes within the network are implicitly conditionally independent Poisson random variables, denoted by

    x_{t,ij} ~ Po(r_{t,i} phi_{t,ij}),

where: (i) the phi_{t,ij} are latent Poisson rates governing the outflows from node i to all other nodes j at time t, and (ii) the occupancy ratio r_{t,i} provides appropriate scaling of outflows from node i as its occupancy count changes over time.
Decoupling allows individual flows to be updated independently, to achieve fast parallel computing per unit of observation time. One aspect of recoupling is to then directly revert to the fundamental time-varying multinomial transition probabilities of eqn. (1) via theta_{t,ij} = phi_{t,ij} / sum_k phi_{t,ik}, so that the probabilities sum to 1 over destinations. Given any set of the phi_{t,ij} simulated from posterior distributions, this trivial computation provides inferences on the multinomial transition probability processes.
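As a concrete sketch of this recoupling computation (all names and numbers below are illustrative, not taken from the paper), posterior draws of latent Poisson outflow rates for one origin node can be normalized, draw by draw, to recover posterior draws of that node's multinomial transition probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws of latent outflow rates phi_{i->j} for one
# origin node i at one time point: shape (n_draws, n_destinations).
# Destinations include the external node 0 (departure from the network).
phi_draws = rng.gamma(shape=5.0, scale=2.0, size=(1000, 4))

# Recoupling: transition probabilities are the rates normalized to sum
# to one across destinations, separately within each posterior draw.
theta_draws = phi_draws / phi_draws.sum(axis=1, keepdims=True)

# Each draw is a valid probability vector.
assert np.allclose(theta_draws.sum(axis=1), 1.0)

# Posterior summaries of the transition probabilities.
theta_mean = theta_draws.mean(axis=0)
```

Because the normalization is applied per draw, full posterior uncertainty about the rates propagates directly to the transition probabilities.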
3.2 Mapping to Dynamic Gravity Model
Besides the mapping to multinomial transitions discussed in Section 3.1, the second aspect of recoupling is to map inferences on the phi_{t,ij} to those of a separate dynamic gravity model (DGM; e.g. West, 1994; Sen & Smith, 1995; Congdon, 2000). This mapping enables evaluation of flow patterns at various levels of the network: (i) the overall network level, (ii) the level of each individual node, and (iii) all pairs of nodes. The details follow Chen et al. (2018) and are summarized here, with some additional technical detail in Appendix A. A DGM represents the latent rates as a product of four terms, at each time and for each pair of nodes, viz

    phi_{t,ij} = mu_t alpha_{t,i} beta_{t,j} gamma_{t,ij}

for each within-network node i, all destination nodes j, and all times t. Here the latent rates are mapped to: (i) a network-level flow rate process mu_t; (ii) a multiplicative origin effect process alpha_{t,i} for each node i; (iii) a multiplicative destination effect process beta_{t,j} for each node j; and (iv) an affinity process gamma_{t,ij}, a dynamic interaction term representing the directional attractiveness of node j as a destination for flows from node i, relative to the contributions of baseline and main effects.
Given posterior samples of the phi_{t,ij} over nodes and time, we can directly map to the DGM parameters for more incisive inference on node-specific and node-node interactions over time. In this sense, the flexible class of decoupled/recoupled DGLMs can be used as a Bayesian emulator for inference in the DGM. Technical details of the mapping are given in Appendix A.
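One simple way to realize such a mapping is an ANOVA-style decomposition of the log rates; the zero-mean identification of log effects used below is an assumed convention for illustration, and the paper's Appendix A may specify the mapping differently:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical posterior draw of latent flow rates lambda_{i->j} at one
# time point, for a small 5-node network (rows = origins, cols = destinations).
lam = rng.gamma(shape=3.0, scale=4.0, size=(5, 5))
log_lam = np.log(lam)

# ANOVA-style decomposition on the log scale (zero-mean identification):
#   log lambda_{ij} = log mu + log alpha_i + log beta_j + log gamma_ij
log_mu = log_lam.mean()                       # baseline
log_alpha = log_lam.mean(axis=1) - log_mu     # origin effects
log_beta = log_lam.mean(axis=0) - log_mu      # destination effects
log_gamma = log_lam - log_mu - log_alpha[:, None] - log_beta[None, :]

# The four multiplicative terms reconstruct the rates exactly.
recon = np.exp(log_mu + log_alpha[:, None] + log_beta[None, :] + log_gamma)
assert np.allclose(recon, lam)
```

Applying this map to every posterior draw of the rates yields full posterior distributions for the baseline, origin, destination and affinity processes.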
4 Dynamic Generalized Linear Models (DGLMs)
DGLMs are generalized linear models (McCullagh & Nelder, 1989; West, 1985) with time-varying parameters defined by state-space evolutions of regression vectors. Time series observations are conditionally drawn from a sampling model in the exponential family, and the natural parameter of the distribution is regressed on a time-evolving state-vector. These models build on dynamic linear models and are central in Bayesian work in applied time series with non-Gaussian data (West & Harrison, 1997; Prado & West, 2010). We focus here on the special example of conditionally Poisson models for both the network inflow count time series and the decoupled within-network flow series; that is, for all node pairs and all times t, the sampling models are x_{t,ij} ~ Po(r_{t,i} phi_{t,ij}), with x_{t,0i} ~ Po(lambda_{t,i}) for inflows from outside the network. DGLMs are created via state-space models for the latent rate processes phi_{t,ij} and lambda_{t,i}.
Ignoring the node indices for clarity, consider a single Poisson rate process phi_t. Suppose that the log rate log(phi_t) = F_t' theta_t is defined via an underlying linear, state-space, Markov model in which

    theta_t = G_t theta_{t-1} + omega_t,    omega_t ~ (0, W_t),

with the following elements: (i) F_t is a known regression vector of constants and/or known values of predictors at time t; (ii) theta_t is the corresponding dynamic regression parameter vector, known as the state-vector, at time t; (iii) G_t is a known state-evolution or transition matrix; (iv) omega_t is a random innovation vector representing stochastic changes to the state at time t; (v) the omega_t are independent over time, and the notation omega_t ~ (0, W_t) indicates that omega_t is zero mean and has known variance matrix W_t.
Model specification depends on context, of course, and there are widely used subclasses in which F_t and G_t take specific forms (e.g. Prado & West, 2010, Chapter 4). Some examples include DGLMs in which F_t includes values of known covariates (predictors), intervention indicators, and constants representing groups and design variables, in which cases the corresponding entries in theta_t are dynamic regression coefficients. Natural evolution models then have corresponding rows of G_t as zero but for the implied column index, so that the model indicates a random walk time evolution for those parameters. Relevant to many applications are examples where both F_t and G_t are constant with specific forms chosen to define local smoothing and interpolation, such as in models M1 and M2 defined by

    M1:  F = (1, 0)',     G = | 1  1 |
                              | 0  1 |

    M2:  F = (1, 0, 0)',  G = | 1  1  0 |
                              | 0  1  1 |
                              | 0  0  1 |

In model M1, the latent state vector theta_t = (theta_{t,1}, theta_{t,2})' consists of the current level theta_{t,1} of the latent process and the time t-1 to t change in level (the discrete gradient, or "growth") term theta_{t,2}. This is a local linear growth model (LLGM), one of the most widely used DGLMs both alone and as a component of more elaborate models. The model defines local linear interpolation of time-varying trends that are otherwise regarded as unpredictable, and is key to retrospective smoothing of patterns in time series. Model M2 is a more elaborate local quadratic model in which the third element of the state vector represents time-varying changes in gradient. More complicated local smoothing can be defined by higher-order polynomial DGLMs with obvious extension (West & Harrison, 1997, Chapters 7, 10). The case study of this paper adopts the class of LLGMs defined as in model M1 above.
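A minimal simulation of an LLGM-driven Poisson count series may help fix ideas; the innovation variances and initial state below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Local linear growth model (LLGM): the state vector holds the current
# level and growth of the log Poisson rate; F picks out the level.
F = np.array([1.0, 0.0])               # observation (regression) vector
G = np.array([[1.0, 1.0],
              [0.0, 1.0]])             # evolution: level += growth each step
W = np.diag([0.005, 0.00005])          # illustrative innovation variances

T = 50
theta = np.array([np.log(20.0), 0.0])  # assumed initial level and growth
counts = np.empty(T, dtype=int)
for t in range(T):
    theta = G @ theta + rng.multivariate_normal(np.zeros(2), W)
    counts[t] = rng.poisson(np.exp(F @ theta))  # log-linear Poisson link
```

The growth component lets locally linear trends persist across steps, which is exactly what a pure random-walk level model cannot capture.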
Summary details, including algorithms for implementation, of Bayesian analysis of general DGLMs are given in Appendix B. This includes details of sequential learning, i.e., forward filtering to process data as it arrives and sequentially update prior-to-posterior summary information for the state vectors over time. At any time t this enables inference on the current state and forecasts of coming data. This online analysis is most relevant to sequential learning and monitoring of flows in many applications. Then, based on an observed time series of flows over a period of time t = 1:T, key interests are addressed by retrospective analysis that examines inferences on historical trajectories of state vectors, and any functions of them of interest. Bayesian analysis here is best addressed using simulation of posteriors over historical trajectories, and the implied posteriors for past evolution in patterns of substantively interesting parameters such as those of dynamic gravity models. Full technical and algorithmic details of this are summarized in Appendix B. One key element of model specification is the extent and nature of time-variation in the state vector as defined by the variance matrices W_t. These are specified using the standard discount factor method; see Chapter 6 of West & Harrison (1997) and Section 4.3.6 of Prado & West (2010), and the additional technical details in Appendix B.
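For intuition, here is a compact forward-filtering sketch for a Poisson LLGM using discounting and a conjugate-gamma/linear-Bayes step. The gamma moment matching below is a common large-sample approximation, and the initialization choices are assumptions; the exact algorithm of the paper's Appendix B may differ in detail:

```python
import numpy as np
from scipy.special import digamma, polygamma

def llgm_filter(y, delta=0.9):
    """Approximate forward filtering for a Poisson local linear growth DGLM.

    A sketch in the spirit of West & Harrison (1997, Ch. 14): a discount
    factor sets the evolution variance, and an approximate conjugate-gamma
    step updates the state moments via linear Bayes.
    """
    F = np.array([1.0, 0.0])
    G = np.array([[1.0, 1.0], [0.0, 1.0]])
    m = np.array([np.log(max(y[0], 1)), 0.0])   # crude initial prior mean
    C = np.eye(2)                               # diffuse initial variance
    rates = []
    for yt in y:
        a = G @ m
        R = G @ C @ G.T / delta        # discount factor inflates uncertainty
        f = F @ a                      # prior mean of the log rate
        q = F @ R @ F                  # prior variance of the log rate
        r, c = 1.0 / q, np.exp(-f) / q          # matched gamma prior (approx.)
        fs = digamma(r + yt) - np.log(c + 1.0)  # posterior mean of log rate
        qs = polygamma(1, r + yt)               # posterior variance
        m = a + R @ F * (fs - f) / q            # linear Bayes state update
        C = R - np.outer(R @ F, R @ F) * (1.0 - qs / q) / q
        rates.append(float(np.exp(F @ m)))      # filtered rate estimate
    return np.array(rates)

rates = llgm_filter(np.array([10, 11, 13, 12, 15, 16, 18, 17, 20, 22]))
```

Smaller values of delta adapt faster to change; values near 1 impose heavier smoothing, matching the discussion of discount factors above.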
5 LLGM Analysis of Fox News Flow Data
5.1 Fox News Flow Data
The case study concerns flow data recording individual visitors (identified by IP address) to well-defined nodes of the Fox News website. Our sample of data concerns traffic on September 17th, 2015 (a Thursday), restricted to IP addresses linked to visitors from the Eastern Daylight Time zone. The website is structured by Adex Category, a partition derived from text mining of webpage content and widely used in online advertising for webpage analysis. The pre-defined categories comprise a set of main categories and different levels of sub-categories. The main categories are Arts & Entertainment, Computers & Electronics, Finance, Games, Home & Garden, Business & Industrial, Internet & Telecom, People & Society, News, Shopping, Law & Government, Sports, Books & Literature, Real Estate, Beauty & Fitness, Health, Autos & Vehicles, Hobbies & Leisure, Pets & Animals, Travel, Food & Drink, Science, Online Communities, Reference, Jobs & Education and World Localities. Although web page content can combine different categories, the Adex Category tool enforces a strict partition. Exploratory study found little cluster structure among subcategories sharing the same main category.
Data are aggregated to five-minute intervals, as suggested by stability of exploratory analysis results across temporal levels of data aggregation. This defines a time series with 288 time points having the structure described in Section 3. At each time interval, over the directed network with nodes classified by Adex Category, the data include counts of transitions of visitors between each pair of nodes, incoming flows from outside the Fox News website to each node, and the total number of people visiting each node. There are no relevant additional covariates available, so the analysis focuses wholly on temporal trends in network, node-specific and node-node interactions as evidenced through analysis of flexible DGLMs that allow and adapt to changes over time. This is done using the special case of local linear growth DGLMs, i.e., the LLGM framework.
Most people spend no more than five minutes on a single web page (Jansen et al., 2007). Therefore, visitors who spend more than five minutes in a node are deemed inactive and handled as if they have left the website. Also, user information is unavailable before and after September 17, so the inactivity rule means the first and last five-minute intervals are eliminated from the time series, which now has length 286. Then, there are some categories with little or no traffic during the entire day, so only those categories with sufficient data are considered in the analysis. Applying a threshold to the total traffic across all time periods leaves a reduced set of categories for analysis.
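The preprocessing steps above can be sketched as follows; the array sizes, rate values and the traffic threshold here are illustrative assumptions, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical aggregated counts: rows = categories, cols = 288 five-minute
# intervals in one day (all numbers are illustrative, not the paper's data).
counts = rng.poisson(lam=5.0, size=(30, 288))
counts[25:] = rng.poisson(lam=0.05, size=(5, 288))  # low-traffic categories

# Drop the first and last intervals: the inactivity rule makes them
# unusable without data from the adjacent days.
counts = counts[:, 1:-1]
assert counts.shape[1] == 286

# Keep only categories whose total daily traffic clears a threshold
# (the threshold value used in the paper is not reproduced here).
THRESHOLD = 100  # illustrative
keep = counts.sum(axis=1) >= THRESHOLD
counts = counts[keep]
```

With the synthetic rates above, the five near-empty categories fall below the threshold and are dropped, mirroring the pruning of low-traffic Adex categories.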
5.2 Some Initial DGLM Analysis Summaries
We focus on individual flows, and evaluate performance by comparing the accuracy of one-step-ahead predictions against predictions from the Bayesian Dynamic Flow Model (BDFM) of Chen et al. (2018). In our LLGM analysis, for each individual flow, the latent state vector has two components: the level of the rate parameter on the log scale and the local linear growth term. The prior on the level is chosen with mean equal to the point estimate based on a simple average of data in the five minutes prior to the beginning of the time series, and the prior mean for the growth term is zero. The initial prior variance matrix has zero covariances, with diagonal entries representing a relatively diffuse initial prior that allows for swift adaptation to the data over the initial few time points. The discount factor for all reported analyses is set at a common value for all node pairs. This level of discounting encourages smoothness but also allows the model to flexibly adapt to changes; variations (that have been explored) might marginally improve local descriptions for some node pairs but are secondary to the interests and emphases here.
Decoupling allows us to apply the LLGM simultaneously to all of the flows. The LLGM analysis of flows staying at the category Arts & Entertainment is used to illustrate model performance. Fig. 3 shows the forward filtering results for both the rate parameter and the local linear growth term, together with the results as smoothed by backward sampling. In general, both the sequential and retrospective analyses capture patterns of change over time well, and the efficacy of retrospective analysis is highlighted. We note that the intervals representing uncertainty about trajectories are very tight, indicating a high level of precision in estimating the underlying transition processes. This is also true in several of the following graphs for other model parameters.
There are periods when volatile patterns in the data challenge model adaptability. For example, in the morning and late at night, forward filtering analysis without any intervention underestimates the swings in the rate parameter. The sequential analysis is also more sensitive to outliers. For instance, at one point in the series there are two data points with low counts compared to the data before and after; see the red '+'s in Fig. 3. These two data points drive the rate down in the forward filtering. That said, the retrospective analysis is able to resolve these two issues by using information from later in the series. Looking back, retrospective posterior analysis provides smoother and more accurate inference on the rate parameter.
Forward filtering and backward smoothing for the local linear growth term give insight into how trends vary during the day. For example, the Poisson rate for people staying in Arts & Entertainment at first decreases overnight, but the decrease slows, reaching a minimum in the early morning. Afterwards, the rate increases rapidly and reaches the first peak of the day during the morning, then maintains a steady, high level for much of the day. The rate then reaches its second peak in the evening, and after that, the activity level declines rapidly until midnight.
5.3 Comparison with BDFM
Some comparison of LLGM and the simple smoothing model (Chen et al., 2018), referred to as a Bayesian dynamic flow model (BDFM), was made; one example concerns counts staying at category Arts & Entertainment. The discount factor controlling adaptability is the same for both. Summaries here focus on forward filtering and one-step ahead forecasting for comparison. Both models perform well when the trend is stable: in such periods, the one-step ahead forecasts from both models agree closely with the observed data. However, BDFM tends to underestimate the rate when the trend is rising and to overestimate it when the trend is declining. In contrast, LLGM still provides good point-wise prediction. Across nearly all flows, LLGM outperforms BDFM in terms of one-step ahead forecast accuracy, as illustrated in Fig. 4.
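A toy numerical illustration of why a growth term helps on trending series (all numbers invented, not from the case study): a pure smoother's one-step forecasts lag a rising trend, inflating its mean absolute deviation (MAD) relative to a model that tracks the gradient:

```python
import numpy as np

# Illustrative one-step-ahead forecast comparison (all series synthetic).
y = np.array([10, 12, 15, 18, 22, 27])        # observed counts, rising trend
f_smooth = np.array([9, 10, 12, 15, 18, 22])  # level-only smoother lags
f_growth = np.array([10, 12, 14, 17, 21, 26]) # growth term tracks the trend

mad_smooth = np.mean(np.abs(y - f_smooth))
mad_growth = np.mean(np.abs(y - f_growth))
assert mad_growth < mad_smooth
```

The systematic lag of the level-only forecasts mirrors the under-estimation of BDFM during rising trends described above.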
6 Model Mapping Analysis
6.1 Daily Fox News Flows Example
We now recouple, mapping the retrospective results for the log rate parameters to the DGM. The results provide insight into four aspects of flow dynamics: (i) the baseline process mu_t that characterizes general activity intensity; (ii) the origin effects alpha_{t,i} that relate to the activity of outgoing flows from node i; (iii) the destination effects beta_{t,j} that relate to the attractiveness for incoming flows to node j; and (iv) the directed pairwise affinity effects gamma_{t,ij}, interactions that impact the rate processes governing flows from node i to node j.
6.1.1 Baseline level
We apply the DGM decomposition to the Fox News data on the network defined by Adex Category for September 17th, 2015. As expected, the baseline activity level reflects human routine: high traffic during the day and early evening, and low traffic late at night, as shown in Fig. 5.
The day starts at midnight. As users go offline, the overall activity level decreases, reaching a minimum in the very early morning. The mean then increases through the morning until the trend becomes flat. During the day, the website maintains a relatively high level of activity, with three small bumps at typical times for work breaks. There is a slight decreasing trend in the late afternoon, presumably as people travel home. After dinner time, the trend increases to an evening peak, and then declines as people retire.
6.1.2 Origin and destination effects
For most categories, the origin and destination trends are similar so we focus here on the former. Trajectories of origin effects exhibit several general patterns, varying by Adex Category. Three examples of each appear in Fig. 6.
First, categories such as Arts & Entertainment, News, and Shopping show trends similar to the overall activity level. The origin effect increases to a high level during the early morning, is steady during the day and early evening, and then declines. These categories are among the most popular in the network.
Second, categories such as Health, Beauty & Fitness, and News/Weather show high activity during the day, but drop to a low level in the evening and at night. This is reasonable for weather, since many people want to know the forecast before leaving for work. It also seems reasonable for the other categories, which are oriented towards a narrow focus that is not entertainment or relaxation.
Third, categories such as Games, Online Communities/Social Networks, and Food & Drink/Food/Baked Goods show a different trend in origin effect: a pattern that increases from the morning to late evening, peaks in the evening, and exhibits local peaks at other times. Topics on games and social networks pertain to relaxation, which is a reasonable evening activity. The food category has two peaks, in the morning and the late afternoon, which may reflect people looking at recipes in the morning to plan grocery purchases, and then later when preparing dinner.
6.1.3 Affinity effects
Affinity effects capture interaction between pairs of categories. Pairs with strong affinity may indicate that people interested in one tend to be interested in the other, which would be potentially valuable in computational advertising. There are four kinds of affinity effects: (i) staying at a certain category; (ii) entering the network; (iii) leaving the network; and (iv) moving between two different categories. We discuss these separately. For all four kinds of affinities, we show representative trend patterns, with more extensive examples summarized in Appendix C.
A high self-affinity implies that users tend to stay at that category. A trend that shows times of day when people linger is useful to advertisers since it suggests readers have more leisure time and thus could be tempted to click on ads.
A representative trend pattern features high activity during the business day, and then a lower level in the evening. Such categories include Finance/Investing and Computers & Electronics/Software (Figs. 8 and 8, respectively). The self-affinity of Finance/Investing has three peaks: one around :, one around :, and one around :. Computers & Electronics/Software is interesting since, for most categories, self-affinity drops a bit after : a.m. and is not very high at noon, while the affinity of Computers & Electronics/Software increases over the morning and peaks at noon. This trend indicates that people spend more time reading related content then than at other times of the day, which should inform ad-buy decisions during those peak times.
A category with high entering affinity draws users from outside the Fox News website. Ads shown on such categories may be more cost-effective. The probability of users entering the network at a given node is a measure of that category’s overall popularity.
Trajectories of entering affinities show several interesting patterns. One such pattern peaks in the early morning; e.g., flows into News/Weather and News/Politics. The peak for both is around :, as in Figs. 10 and 10. For News/Weather the insight is obvious. For News/Politics, Sept. 17 is the day after a US national political debate, which may drive extra interest. That said, this peak pattern is sustained on other weekdays, indicating that many people access news sites in the morning. Advertisers should generally avoid placing ads at this time, since people are not in shopping mode.
Categories with high exiting affinities tend to be the last stop for users navigating the Fox News website. Such categories are the last chance to show an ad, and if grocery store checkout lines are any guide, a good opportunity for impulse purchases.
One interesting trend increases over the morning, peaks at about : and then stays stable with high intensity until evening. Often, these categories are the only categories visited by a user. Examples include Arts & Entertainment and Food & Drink/Cooking & Recipes (Figs. 12 and 12). Though the affinity intensity is stable, there is an interpretable bump during : - : for Food & Drink/Cooking & Recipes which is probably related to dinner preparation.
Distinct node pair affinities:
High levels of affinity between distinct category pairs indicate interaction, which could reasonably influence advertising strategy.
Among such affinity patterns, the one with a bump is the kind in which we are most interested. The bump indicates that users move between those categories only within a short window and remain inactive during the rest of the day, so this short window is the best time to display related ads. Examples include Online Games to Video Games and Home & Garden to Reference/General Reference/How to DIY & Expert Content.
The affinity from Online Games to Video Games (Fig. 14) has a bump in the evening and at night (: - :) during which the average is six times higher than the usual intensity level. This strongly indicates that users who read about online games are also interested in computer and video games during this period.
The affinity from Home & Garden to Reference/General Reference/How to DIY & Expert Content (Fig. 14) has a bump around : - :. Moreover, both the origin effect of Home & Garden and the destination effect of Reference/General Reference/How to DIY & Expert Content are low, while the affinity effect between them is large, indicating strong interaction. Plausibly, people plan home projects in the morning and then seek information on how to implement them.
7 Closing Comments
The case study demonstrates the value of this new class of dynamic network flow models on a large network. The specific special case of DGLMs adopted—the local linear growth model (LLGM)—is just one example of the broader class, but the analysis examples highlight the ability of this family to characterize and adapt to quite heterogeneous patterns of change over time in latent flow patterns. An important element of the analysis is that full Bayesian inference, based on computationally efficient retrospective sampling analysis, defines not only point estimates of trajectories of key dynamic parameters, but also uncertainties about them from posterior samples.
A critical component of the analysis is the strategy of decoupling/recoupling. This has two aspects. First, the individual univariate DGLMs for dynamic Poisson flows are learned by forward filtering updates at each time step; then, for all within-network nodes, these are theoretically recoupled to define inferences on the dynamic transition probability vectors in the inherent conditional multinomial distributions governing flows, so inferring node dependencies. The decoupling/recoupling strategy also provides an ideal structure for parallel computation, enabling scalable and efficient analysis for increasingly large dynamic networks. Using a computer with cores, the computational demands for a series of length on a network with nodes scale only as . A 2018 MATLAB implementation running on a standard laptop (2.3GHz CPU, 16GB memory) took minutes to run our analysis with , .
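The decoupled filtering step is embarrassingly parallel across origin-destination pairs, since each univariate model touches only its own count series. The following Python fragment is an illustrative sketch (not the paper's MATLAB implementation): it maps a per-pair filter over all flow series concurrently, where `filter_pair` is a hypothetical stand-in using simple exponential discounting in place of the full DGLM forward filter.

```python
from concurrent.futures import ThreadPoolExecutor

def filter_pair(counts, delta=0.95):
    # Hypothetical stand-in for the univariate Poisson DGLM forward filter:
    # an exponentially discounted mean tracks the flow level of one
    # origin-destination pair. The real analysis runs the full DGLM update.
    level, weight = 0.0, 0.0
    trajectory = []
    for x in counts:
        level, weight = delta * level + x, delta * weight + 1.0
        trajectory.append(level / weight)
    return trajectory

def decoupled_filter(flows, max_workers=4):
    """Filter all origin-destination series independently (decoupling).

    `flows` maps (origin, destination) -> list of counts over time.
    Each series is processed in parallel; the per-pair results are then
    available for the subsequent recoupling step.
    """
    pairs = list(flows)
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = list(pool.map(filter_pair, (flows[p] for p in pairs)))
    return dict(zip(pairs, results))
```

Because each series is filtered independently, wall-clock time scales down with the number of workers, which is the source of the scalability noted above.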
The second aspect of recoupling analysis is the use of the set of DGLM-based models as an emulator for a multivariate dynamic gravity model that is explicitly parametrized in terms of network-wide, node-specific and node pair interaction effects, all potentially time-varying. The Fox News web study shows a number of examples of the use of this mapping to uncover—again with full Bayesian posterior uncertainty measures—interpretable patterns in time trajectories of node main effects (impacting both incoming and outgoing traffic) and node-node pair interactions (in terms of affinities of one node for another that impact traffic flows between them). We have noted the potential for such information to be exploited by on-line advertisers in the case study, and formal inferences will be of interest in other applications.
The DGM mapping also enables investigation of network grouping structures over time by clustering nodes using the dynamic node-node pair affinities, with potential for deeper future study. With the overall network intensity and node main effects removed by the DGM decomposition, the affinity between a pair of nodes is an inherently interesting measure of the “closeness” of nodes; at one level, it can be regarded as a coherent statistical summary of node-node relationships and their patterns over time (and much preferred to any raw data summary). We note, in addition, that the node-specific main effects for inflows can be regarded as statistical measures of node importance or centrality; inflows and outflows together reflect the scale of importance of a node in terms of numbers and flow intensities with neighbors in the network. Traditional summary network topology measures may be of interest in other contexts, but for dynamic network traffic analysis these model-derived quantities are foremost, and their inferred trajectories over time are of primary interest.
The case study exploits and highlights the adaptability and utility of the local trend LLGM class of models as a key driver of the overall decouple/recouple and emulation analysis. This is, of course, just one special subclass of the full class of DGLMs, and other forms are applicable in other areas. Future studies with available covariate information might include, for example, internet traffic studies with known interventions, dynamic traffic flow with geographical and structural information, or brain network data with known or hypothesized connectivity information. Such studies can be expected to exploit more general DGLMs that include node-specific covariates or dummy variables representing intervention effects (e.g., for known ad placements in e-commerce examples, or network structure changes in others). This work offers benefits in other applications through the flexibility to customize the DGLMs to the context. The overall model framework and approach needs only customization of details in the specification of the state-space elements (which will in some extensions be specific to pairs of nodes as well as time-varying), as the analysis described here and illustrated in the case study applies generally to the full DGLM class.
Appendix A: Recouple Mapping to Dynamic Gravity Model
The mapping from the DGLM flow model to DGM parameter processes of Section 3.2 follows developments in Chen et al. (2018), summarized as follows. We work on logged DGM parameters $\log\mu_t$, $\log\alpha_{t,i}$, $\log\beta_{t,j}$ and $\log\gamma_{t,ij}$. To ensure identification and a one-one mapping between the models, the traditional zero-sum constraint is adopted for the main effects and interactions at each time $t$. Then, given the full set of values $\lambda_t(i,j)$ in the DGLM flow model, define $\eta_t(i,j) = \log\lambda_t(i,j)$ for each pair $(i,j)$ at each time $t$. The recoupling step is then implemented by computing DGM parameters as below. In the notation, a subscript $\cdot$ indicates summation over the relevant index $i$ or $j$, and $I$ denotes the number of nodes. For each $t$:
compute the network-wide effect $\log\mu_t = \eta_t(\cdot,\cdot)/I^2$ where $\eta_t(\cdot,\cdot) = \sum_{i,j}\eta_t(i,j)$; then,
for each $i$ compute the origin node main effect $\log\alpha_{t,i} = \eta_t(i,\cdot)/I - \log\mu_t$ for $i = 1{:}I$; then,
for each $j$ compute the destination node main effect $\log\beta_{t,j} = \eta_t(\cdot,j)/I - \log\mu_t$ for $j = 1{:}I$; finally,
for each $i$ and $j$ compute the affinity (interaction) effect $\log\gamma_{t,ij} = \eta_t(i,j) - \log\mu_t - \log\alpha_{t,i} - \log\beta_{t,j}$, under which the zero-sum constraints on main effects and interactions hold by construction.
This is applied to all simulated values from the posterior analysis under the DGLMs to create implied posteriors for the DGM parameter processes.
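The recoupling computation above is a standard two-way decomposition on the log scale, applied pointwise to each posterior sample at each time. A minimal Python sketch (assuming a square matrix of strictly positive sampled rates; the function name is hypothetical) is:

```python
import math

def dgm_decompose(rates):
    """Map a matrix of positive Poisson flow rates at one time point to
    dynamic gravity model parameters via the two-way decomposition of
    log rates. rates[i][j] is the rate from origin i to destination j.
    Returns (log_mu, log_alpha, log_beta, log_gamma) satisfying the
    zero-sum identification constraints."""
    n = len(rates)
    eta = [[math.log(rates[i][j]) for j in range(n)] for i in range(n)]
    log_mu = sum(sum(row) for row in eta) / n**2                  # network-wide level
    log_alpha = [sum(eta[i]) / n - log_mu for i in range(n)]      # origin main effects
    log_beta = [sum(eta[i][j] for i in range(n)) / n - log_mu     # destination effects
                for j in range(n)]
    log_gamma = [[eta[i][j] - log_mu - log_alpha[i] - log_beta[j] # affinities
                  for j in range(n)] for i in range(n)]
    return log_mu, log_alpha, log_beta, log_gamma
```

Running this map over all Monte Carlo samples of the rates at each time point induces full posterior samples for the DGM parameter trajectories.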
Appendix B: Bayesian Analysis of Poisson DGLMs
Analysis is based on sequential Bayesian computation that combines variational Bayes approximation with linear Bayes updates (West & Harrison, 1997; Hartigan, 1969; Goldstein, 1976), along with retrospective sampling of posteriors for state vector trajectories, extending earlier algorithms for DGLM analysis.
Sequential analysis: Forward filtering and learning
At time $t=0$, specify a prior mean vector and variance matrix for the pre-initial state vector, denoted by $\theta_0 \sim [m_0, C_0]$. Then, over every future time point $t$, the evolution from time $t-1$ to $t$, prediction of the count $x_t$ from time $t-1$, and posterior update based on observing $x_t$
follows the standard evolve/predict/update cycle of Bayesian state-space model analyses. In DGLMs, we use two approximations in the cycle. First, as the evolution noise distribution is specified only in terms of first and second order moments, the modeler is free to constrain implied state and predictive distributions to chosen forms; DGLMs use the variational Bayes concept to constrain to conjugate forms, enabling fast and efficient computation, as well as defining predictive distributions of contextually appropriate (for count time series) negative binomial form. This is coupled with the use of decision-theoretic linear Bayes approximations to feed back data information in the prior-posterior update (filtering) step at each time step, appropriately conditioning the mean vector and variance matrix for the state vector as new data are processed. Details are summarized below.
At time $t$, given all the previous data and information $\mathcal{D}_{t-1}$, the mean vector and variance matrix of the posterior for the state vector $\theta_{t-1}$ are available as $\theta_{t-1}|\mathcal{D}_{t-1} \sim [m_{t-1}, C_{t-1}]$.
By the state evolution equation $\theta_t = G_t\theta_{t-1} + \omega_t$ with $\omega_t \sim [0, W_t]$, the implied time $t$ prior for $\theta_t$ has moments $\theta_t|\mathcal{D}_{t-1} \sim [a_t, R_t]$, where $a_t = G_t m_{t-1}$ and $R_t = G_t C_{t-1} G_t' + W_t$.
This implies that the linear predictor $\eta_t = F_t'\theta_t = \log\lambda_t$ has prior mean and variance given by $f_t = F_t'a_t$ and $q_t = F_t'R_tF_t$, respectively.
The implied prior for the latent Poisson rate $\lambda_t$ is constrained to a conjugate gamma form based on the above information—a variational Bayes decision and constraint. That is, the modeling choice is made to specify $\lambda_t|\mathcal{D}_{t-1} \sim Ga(r_t, c_t)$ with defining parameters $(r_t, c_t)$ consistent with the prior information, i.e., consistent with the moment constraints $(f_t, q_t)$ on $\log\lambda_t$. Matching these moments to the gamma prior implies that $(r_t, c_t)$ are given as solutions to the equations $\psi(r_t) - \log(c_t) = f_t$ and $\psi'(r_t) = q_t$, respectively, where $\psi(\cdot)$ is the digamma function and $\psi'(\cdot)$ is the trigamma function. These equations are easily solved numerically (most efficiently using the Newton-Raphson method) to give the values of $(r_t, c_t)$.
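The moment-matching step can be implemented with a short Newton-Raphson iteration. The sketch below is illustrative rather than the paper's implementation: it solves the trigamma equation for $r_t$ first, then recovers $c_t$ from the digamma equation, using self-contained series approximations of the digamma and trigamma functions (in practice, library special functions would be used).

```python
import math

def digamma(x):
    # Recurrence to push x >= 6, then the standard asymptotic expansion.
    s = 0.0
    while x < 6.0:
        s -= 1.0 / x
        x += 1.0
    inv = 1.0 / x
    return (s + math.log(x) - 0.5 * inv
            - inv**2 * (1/12 - inv**2 * (1/120 - inv**2 / 252)))

def trigamma(x):
    s = 0.0
    while x < 6.0:
        s += 1.0 / x**2
        x += 1.0
    inv = 1.0 / x
    return (s + inv + 0.5 * inv**2
            + inv**3 * (1/6 - inv**2 * (1/30 - inv**2 / 42)))

def match_gamma(f, q, tol=1e-10):
    """Solve digamma(r) - log(c) = f and trigamma(r) = q for (r, c),
    matching the prior moments of log(lambda) to a Gamma(r, c) prior."""
    r = max(1.0 / q, 0.1)              # trigamma(r) ~ 1/r for large r
    for _ in range(100):               # Newton-Raphson on trigamma(r) = q
        h = 1e-6 * r
        slope = (trigamma(r + h) - trigamma(r - h)) / (2 * h)
        r_new = r - (trigamma(r) - q) / slope
        if r_new <= 0:
            r_new = r / 2.0            # damp to keep r positive
        if abs(r_new - r) < tol * r:
            r = r_new
            break
        r = r_new
    c = math.exp(digamma(r) - f)       # c then follows from the mean equation
    return r, c
```

The trigamma equation has a unique root since the trigamma function is positive and strictly decreasing, so the iteration is well behaved for any $q > 0$.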
Forecasting from time $t-1$, the conjugate Poisson/gamma structure implies a negative binomial predictive distribution $p(x_t|\mathcal{D}_{t-1}) = \binom{x_t + r_t - 1}{x_t}\left(\frac{c_t}{1+c_t}\right)^{r_t}\left(\frac{1}{1+c_t}\right)^{x_t}$ for $x_t = 0, 1, \ldots$.
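As a concrete check of this predictive form, the negative binomial probabilities implied by the Poisson/gamma mixture can be evaluated on the log scale for numerical stability; this sketch assumes the Gamma(shape $r$, rate $c$) convention used above.

```python
import math

def log_nb_pred(x, r, c):
    """Log predictive probability of count x under the Poisson/gamma mixture:
    lambda ~ Gamma(shape=r, rate=c), x | lambda ~ Poisson(lambda).
    Binomial coefficient handled via lgamma to allow non-integer r."""
    return (math.lgamma(x + r) - math.lgamma(r) - math.lgamma(x + 1)
            + r * math.log(c / (1.0 + c)) + x * math.log(1.0 / (1.0 + c)))
```

Summing the exponentiated log probabilities over a wide range of counts recovers total mass one, confirming the normalization of the predictive.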
On observing $x_t$ at time $t$, the implied posterior for the latent Poisson rate is the conjugate form $\lambda_t|\mathcal{D}_t \sim Ga(r_t + x_t, c_t + h_t)$ (where $h_t$ is the relevant occupancy correction factor). For the natural parameter $\eta_t = \log\lambda_t$ this implies posterior moments $(g_t, p_t)$ given by
$g_t = \psi(r_t + x_t) - \log(c_t + h_t)$ and $p_t = \psi'(r_t + x_t)$, and these are trivially calculated.
Using linear Bayes decision theory arguments, the posterior mean vector and variance matrix for the state vector are conditioned on the new information via predictor-corrector forms that adjust the prior moments based on forecast accuracy. Specifically, the time $t-1$ to $t$ posterior updates of moments required to complete the time $t$ filtering step are given by
$m_t = a_t + A_t(g_t - f_t)$ and $C_t = R_t - A_tA_t'(q_t - p_t)$, where $A_t = R_tF_t/q_t$ is the adaptive coefficient vector.
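The predictor-corrector update is a few lines of linear algebra. The following sketch works for a small state vector represented with plain Python lists; symbols follow the notation above, with the posterior moments $(g_t, p_t)$ of the natural parameter supplied as inputs.

```python
def linear_bayes_update(a, R, F, f, q, g, p):
    """One linear Bayes filtering correction: map prior state moments (a, R)
    to posterior moments (m, C), given prior moments (f, q) and posterior
    moments (g, p) of the natural parameter log(lambda)."""
    n = len(a)
    # Adaptive coefficient vector A = R F / q.
    A = [sum(R[i][k] * F[k] for k in range(n)) / q for i in range(n)]
    # Predictor-corrector moment updates.
    m = [a[i] + A[i] * (g - f) for i in range(n)]
    C = [[R[i][j] - A[i] * A[j] * (q - p) for j in range(n)] for i in range(n)]
    return m, C
```

Note that the correction shrinks the prior variance matrix only to the extent that the observation is informative, i.e., only when $p_t < q_t$.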
Discount specification of evolution variance matrices
Specification of the evolution variance matrix at each time uses the standard single-discount factor approach in which $W_t = G_tC_{t-1}G_t'(1-\delta)/\delta$ for a discount factor $\delta \in (0,1]$. This corresponds to the time $t-1$ posterior variance matrix $C_{t-1}$ evolving to the prior variance matrix $R_t = G_tC_{t-1}G_t'/\delta$. That is, following the deterministic component (defined by $G_t$) of the state evolution, the stochastic innovation term in the evolution increases uncertainties about the state by “discounting” historical information at a rate defined by $\delta$. The discount factor is a tuning parameter to be chosen. See, for example, West & Harrison (1997, Chapter 6) and Prado & West (2010, Section 4.3.6).
The above analysis provides for sequential learning, i.e., forward filtering to process data as it arrives and sequentially update prior and posterior mean vectors and variance matrices for the state vector over time. At any time this enables inference on the current state and forecasts of coming data. For most network studies this is most relevant to online learning and monitoring of flows. Then, having processed data up to any time $T$, a key interest is in looking back over time to update inferences on the historical trajectories of state vectors, and any functions of them of interest. This is called retrospective analysis and is best addressed in terms of simulation of posterior distributions over the full past history of states. This is enabled using DGLM extensions of the standard “backward sampling” algorithm for conditionally Gaussian dynamic linear models. That is, exploiting the retrospective extrapolation of posterior mean vectors and variance matrices, and adopting the variational Bayes concept again to constrain the implicit backward innovations to multivariate normal distributions with moments defined by the linear Bayes retrospection, we impute and simulate historical trajectories of sets of states by recursing backwards in time as follows.
At time $T$, sample the approximating normal posterior $\theta_T|\mathcal{D}_T \sim N(m_T, C_T)$.
Recurse back over times $t = T-1, T-2, \ldots, 1$, at each stage sampling the variational Bayes normal approximation to $p(\theta_t|\theta_{t+1}, \mathcal{D}_T)$.
At $t = 1$, save the sampled trajectory of states $\theta_{1:T}$.
Repeat to generate a random sample of trajectories.
From a Monte Carlo sample of trajectories, we can then directly map to any functions of the state vectors for inference. Centrally, this includes mapping to the sampled Poisson rate parameters and then to the network-wide, node-specific and node-node interaction parameter processes of the dynamic gravity models.
In terms of implied computation, the retrospective theory simplifies considerably in the single discount DGLM. Simulation at time $t$ of the state vector from the implied conditional normal for $\theta_t|\theta_{t+1}, \mathcal{D}_T$ uses a mean vector and variance matrix given by simplified versions of the general expressions in dynamic linear models (e.g. Prado & West, 2010, Sections 4.3.5 and 4.3.6). In detail, the conditional moments are simply
$E[\theta_t|\theta_{t+1}, \mathcal{D}_T] = (1-\delta)m_t + \delta G_{t+1}^{-1}\theta_{t+1}$ and $V[\theta_t|\theta_{t+1}, \mathcal{D}_T] = (1-\delta)C_t$, and the implied approximate normal distribution is trivially simulated.
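For intuition, the single-discount backward recursion can be sketched for a scalar state with an identity evolution matrix (a simplifying assumption made here purely for illustration); `m_list` and `C_list` hold the filtered moments from the forward pass.

```python
import math
import random

def backward_sample(m_list, C_list, delta, seed=0):
    """Sample one state trajectory backwards in a single-discount model with
    scalar state and identity evolution (illustrative assumptions).
    Conditional: theta_t | theta_{t+1} ~ N((1-delta)*m_t + delta*theta_{t+1},
    (1-delta)*C_t)."""
    rng = random.Random(seed)
    T = len(m_list) - 1
    theta = [0.0] * (T + 1)
    # Initialize at the end with the filtered posterior at time T.
    theta[T] = rng.gauss(m_list[T], math.sqrt(C_list[T]))
    # Recurse backwards, blending filtered moments with the sampled future state.
    for t in range(T - 1, -1, -1):
        mean = (1 - delta) * m_list[t] + delta * theta[t + 1]
        theta[t] = rng.gauss(mean, math.sqrt((1 - delta) * C_list[t]))
    return theta
```

Repeating this pass gives a Monte Carlo sample of trajectories; as $\delta \to 1$ the sampled path collapses onto the terminal draw, reflecting a static state.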
Appendix C: More Examples of Affinity Effects
Another pattern in self-affinities that is common across nodes resembles the overall activity level; category Arts & Entertainment is a good example (Fig. 16). Levels are high during the day and evening (: - :) with a large bump around : p.m. We also see a pattern that increases throughout the day and peaks at night; the category Arts & Entertainment/TV & Video is one of the few examples (Fig. 16).
Beyond the examples in Section 6.1, another pattern in entering affinities, exemplified by the categories Health/Pharmacy/Drugs & Medications and Finance/Investing (Figs. 18 and 18), has several bumps during the workday (: - :), and is relatively low otherwise; this again has implications for selecting ad content.
A further pattern in entering affinities has high intensity in the morning and then is stable during the day. The entering affinity of Shopping is a good example—see Fig. 20. But things may be complex—perhaps someone on this website at a non-standard time is more apt to purchase than a casual browser.
One additional pattern in entering affinities is notable for multiple pronounced peaks. Beauty & Fitness/Fashion & Style (Fig. 20) has three peaks, at about : - :, at about : - :, and at about : - :. September 17 is during the New York Fashion Week, which may lead to atypical behavior.
Other patterns have been detected in trajectories of exiting affinities. One has a peak, but is otherwise low, as in the exiting affinity for News/Weather (Fig. 22). Clearly, people are visiting this site only to learn about the weather forecast, and then leave for work.
Another pattern in trajectories of exiting affinities increases from : in the morning until the evening, indicating that these categories increasingly lose users throughout the day. Such categories include Arts & Entertainment/Music & Audio (Fig. 22).
Distinct node pair affinities
The affinity from News to News/Local News (Fig. 24) has three peaks in the day. The first is around :, the second around noon, and the last around :. The peaks indicate that people who are interested in national news also check the local news, and that the timing coincides with leisure.
The last example is the affinity from News/Technology News to Shopping (Fig. 24). This peaks around :, which suggests that users who have read technology news then start to explore technology purchases; this is a clear signal for timing ad display.
- Anacleto et al. (2013a) Anacleto, O., Queen, C., & Albers, C. J. (2013a). Forecasting multivariate road traffic flows using Bayesian dynamic graphical models, splines and other traffic variables. Australian and New Zealand Journal of Statistics, 55, 69–86.
- Anacleto et al. (2013b) Anacleto, O., Queen, C., & Albers, C. J. (2013b). Multivariate forecasting of road traffic flows in the presence of heteroscedasticity and measurement errors. Journal of the Royal Statistical Society (Series C: Applied Statistics), 62, 251–270.
- Berry & West (2019) Berry, L. R., & West, M. (2019). Bayesian forecasting of many count-valued time series. Journal of Business and Economic Statistics, to appear. arXiv:1805.05232.
- Berry et al. (2018) Berry, L. R., Helman, P., & West, M. (2018). Probabilistic forecasting of heterogeneous consumer transaction-sales time series. International Journal of Forecasting, invited revise & resubmit. arXiv:1808.04698.
- Bianchi et al. (2018) Bianchi, D., Billio, M., Casarin, R., & Guidolin, M. (2018). Modeling systemic risk with Markov switching graphical SUR models. Journal of Econometrics.
- Chen et al. (2018) Chen, X., Irie, K., Banks, D., Haslinger, R., Thomas, J., & West, M. (2018). Scalable Bayesian modeling, monitoring and analysis of dynamic network flow data. Journal of the American Statistical Association, 113, 519–533.
- Congdon (2000) Congdon, P. (2000). A Bayesian approach to prediction using the gravity model, with an application to patient flow modeling. Geographical Analysis, 32, 205–224.
- Giot et al. (2003) Giot, L., Bader, J. S., Brouwer, C., Chaudhuri, A., Kuang, B., Li, Y., Hao, Y. L., Ooi, C. E., Godwin, B., & Vitols, E. et al. (2003). A protein interaction map of Drosophila melanogaster. Science, 302, 1727–1736.
- Giraitis et al. (2016) Giraitis, L., Kapetanios, G., Wetherilt, A., & Žikeš, F. (2016). Estimating the dynamics and persistence of financial networks, with an application to the Sterling money market. Journal of Applied Econometrics, 31, 58–84.
- Goldstein (1976) Goldstein, M. (1976). Bayesian analysis of regression problems. Biometrika, 63, 51–58.
- Gruber & West (2016) Gruber, L. F., & West, M. (2016). GPU-accelerated Bayesian learning in simultaneous graphical dynamic linear models. Bayesian Analysis, 11, 125–149.
- Gruber & West (2017) Gruber, L. F., & West, M. (2017). Bayesian forecasting and scalable multivariate volatility analysis using simultaneous graphical dynamic linear models. Econometrics and Statistics, 3, 3–22.
- Hanneke et al. (2010) Hanneke, S., Fu, W., & Xing, E. P. (2010). Discrete temporal models of social networks. Electronic Journal of Statistics, 4, 585–605.
- Hartigan (1969) Hartigan, J. A. (1969). Linear Bayesian methods. Journal of the Royal Statistical Society (Series B: Methodological), 31, 446–454.
- Hoff (2011) Hoff, P. D. (2011). Hierarchical multilinear models for multiway data. Computational Statistics and Data Analysis, 55, 530–543.
- Holme (2015) Holme, P. (2015). Modern temporal network theory: A colloquium. European Physical Journal B, 88, 234.
- Holme & Saramäki (2013) Holme, P., & Saramäki, J. (2013). Temporal Networks. Springer.
- Jansen et al. (2007) Jansen, B. J., Spink, A., & Kathuria, V. (2007). How to define searching sessions on web search engines. Pages 92–109 of: Nasraoui, O., Spiliopoulou, M., Srivastava, J., Mobasher, B., & Masand, B. (eds), Advances in Web Mining and Web Usage Analysis: Eighth International Workshop on Knowledge Discovery on the Web, WebKDD 2006. Lecture Notes in Computer Science. Springer.
- Kim et al. (2018) Kim, B., Lee, K. H., Xue, L., & Niu, X. (2018). A review of dynamic network models with latent variables. Statistics Surveys, 12, 105–135.
- Koren et al. (2009) Koren, Y., Bell, R., & Volinsky, C. (2009). Matrix factorization techniques for recommender systems. Computer, 42(8), 30–37.
- McCullagh & Nelder (1989) McCullagh, P., & Nelder, J. A. (1989). Generalized Linear Models. Chapman & Hall.
- Migon & Harrison (1985) Migon, H. S., & Harrison, P. J. (1985). An application of non-linear Bayesian forecasting to television advertising. Pages 681–696 of: Bernardo, J. M., DeGroot, M. H., Lindley, D. V., & Smith, A. F. M. (eds), Bayesian Statistics 2. North-Holland, Amsterdam, and Valencia University Press.
- Newman (2004) Newman, M. E. J. (2004). Analysis of weighted networks. Physical Review E, 70, 056131.
- Newman (2018) Newman, M. E. J. (2018). Network structure from rich but noisy data. Nature Physics, 14, 542.
- Pang & Lee (2008) Pang, B., & Lee, L. (2008). Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2, 1–135.
- Prado & West (2010) Prado, R., & West, M. (2010). Time Series: Modeling, Computation and Inference. Chapman & Hall/CRC Press.
- Queen & Albers (2009) Queen, C. M., & Albers, C. J. (2009). Intervention and causality: Forecasting traffic flows using a dynamic Bayesian network. Journal of the American Statistical Association, 104, 669–681.
- Richard et al. (2014) Richard, E., Gaïffas, S., & Vayatis, N. (2014). Link prediction in graphs with autoregressive features. Journal of Machine Learning Research, 15, 565–593.
- Sarkar et al. (2007) Sarkar, P., Siddiqi, S. M., & Gordon, G. J. (2007). A latent space approach to dynamic embedding of co-occurrence data. Pages 420–427 of: Artificial Intelligence and Statistics.
- Sen & Smith (1995) Sen, A., & Smith, T. (1995). Gravity Models of Spatial Interaction Behavior. Springer.
- Soriano et al. (2013) Soriano, J., Au, T., & Banks, D. (2013). Text mining in computational advertising. Statistical Analysis and Data Mining, 6, 273–285.
- Tebaldi & West (1998) Tebaldi, C., & West, M. (1998). Bayesian inference on network traffic using link count data. Journal of the American Statistical Association, 93, 557–573.
- Tebaldi et al. (2002) Tebaldi, C., West, M., & Karr, A. F. (2002). Statistical analyses of freeway traffic flows. Journal of Forecasting, 21, 39–68.
- Uetz et al. (2000) Uetz, P., Giot, L., Cagney, G., Mansfield, T. A., Judson, R. S., Knight, J. R., Lockshon, D., Narayan, V., Srinivasan, M., & Pochart, P. et al. (2000). A comprehensive analysis of protein-protein interactions in Saccharomyces cerevisiae. Nature, 403, 623.
- West (1985) West, M. (1985). Generalized linear models: scale parameters, outlier accommodation and prior distributions (with discussion). Pages 531–558 of: Bernardo, J. M., DeGroot, M. H., Lindley, D. V., & Smith, A. F. M. (eds), Bayesian Statistics 2. North-Holland, Amsterdam, and Valencia University Press.
- West (1994) West, M. (1994). Statistical inference for gravity models in transportation flow forecasting. Discussion Paper 94-20, Duke University, and Technical Report #60, National Institute of Statistical Sciences.
- West & Harrison (1997) West, M., & Harrison, P. J. (1997). Bayesian Forecasting and Dynamic Models. 2nd edn. Springer.
- West et al. (1985) West, M., Harrison, P. J., & Migon, H. S. (1985). Dynamic generalized linear models and Bayesian forecasting (with discussion). Journal of the American Statistical Association, 80, 73–83.
- Xing et al. (2010) Xing, E. P., Fu, W., & Song, L. (2010). A state-space mixed membership block model for dynamic network tomography. Annals of Applied Statistics, 4, 535–566.
- Xu & Hero (2014) Xu, K. S., & Hero, A. O. (2014). Dynamic stochastic block models for time-evolving social networks. IEEE Journal of Selected Topics in Signal Processing, 8, 552–562.