1 Introduction
Dynamic network analysis is increasingly used in complex application domains, ranging from social networks (Facebook network evolution [Leskovec and Sosič 2014]) to biological networks (protein-protein interaction [Shih and Parthasarathy 2012]), and from political science (United Nations General Assembly voting network [Voeten 2012]) to communication networks (Enron network [Klimt and Yang 2004]). Such dynamic networks are often represented using the snapshot model, under which every network snapshot (represented by a graph) is defined at a logical timestamp. Two questions are of fundamental importance: (i) how does a network evolve? (ii) when does a network change significantly enough to arouse suspicion that something fundamentally different is happening?
Various generative models [Peixoto and Rosvall 2015, Zhang et al. 2016] have been proposed to address question (i), that is, to explain the evolution of a network. They study network evolution under certain generative models [Erdős and Rényi 1960, Karrer and Newman 2011]. In reality, the generative model itself may change, as addressed in question (ii) above. Existing work [Akoglu et al. 2014, Ranshous et al. 2015] uses complex methods to detect such changes. One drawback of these delicate methods is that they are time-consuming, and hence often not scalable (in terms of both network size and number of snapshots). We seek an efficient and effective solution that scales with both network size and number of snapshots.
In this paper, we present a simple and efficient algorithm based on likelihood maximization to detect change points in dynamic networks under the snapshot model. We demonstrate the utility of our algorithm on both synthetic and real-world networks drawn from political science (congressional voting, UN voting), and show that it outperforms two recent approaches (DeltaCon [Koutra et al. 2016] and LetoChange [Peel and Clauset 2015]) in terms of both quality and efficiency. Our work makes the following contributions:

Our approach is general-purpose: it can accommodate various snapshot generative models (see Table 1).

We model network evolution as a first-order Markov process; consequently, our algorithm accounts for the temporal dependency while computing the dissimilarity between snapshots.

Our algorithm is efficient and has a constant memory overhead that can be tuned by a user-controlled parameter.

We extensively evaluate our approach on synthetic as well as real-world networks and show that it performs very well in terms of both efficiency and quality.
2 Related Work
Ranshous et al. [Ranshous et al. 2015] and Akoglu et al. [Akoglu et al. 2014] recently surveyed network anomaly detection. Our change point detection problem is similar to Type 4, "Event and Change Detection," in the former: given a network sequence, a dissimilarity scoring function, and a threshold, a change is flagged if the dissimilarity of two consecutive snapshots is above the threshold. We differ in that we assume there is a latent generative model governing the network dynamics and we try to detect changes in that latent space, whereas the surveyed work does not explicitly model latent generation. Moreover, we consider the temporal dependency across snapshots, which no work in those surveys accounts for.
DeltaCon [Koutra et al. 2016] uses a graph-similarity-based [Berlingerio et al. 2012] approach to detect change points in dynamic networks. It derives features of each snapshot based on sociological theories and computes the feature similarity of each consecutive snapshot pair. That work is model-agnostic (it makes no assumption about the generative model of the networks) and is the state of the art in terms of efficiency; we compare our algorithm against it.
Moreno and Neville [Moreno and Neville 2013], Bridges et al. [Bridges et al. 2015], and Peel and Clauset [Peel and Clauset 2015] develop network-hypothesis-testing approaches. Their advantage is that one obtains a p-value for the test, which quantifies the confidence in the conclusion. However, these approaches have two shortcomings: first, they must assume a specific generative model for the networks (mKPGM, GBTER, and GHRG, respectively); second, they are extremely slow, mostly due to the bootstrapping required for p-value calculation. La Fond et al.'s work [La Fond et al. 2014] can also produce a p-value. It is tested against DeltaCon without reported running times, and efficiency concerns are mentioned in the paper itself. These algorithms will not work in our setting, where detection is done in real time under bounded memory constraints. We compare our model-agnostic algorithm against [Peel and Clauset 2015].
The DAPPER heuristic [Caceres and Berger-Wolf 2013] proposes an edge probability estimator similar to ours. However, it does not consider the temporal dependency of snapshots; moreover, it focuses on temporal scale determination while we focus on change point detection. Loglisci et al. [Loglisci et al. 2015] study change point detection on relational networks using rule-based analysis; our approach uses (hidden) parameter estimation instead of semantic rules to infer the structure. Li et al. [Li et al. 2016] propose an online algorithm and consider temporal dependency, but the problem they study differs from ours: they study information diffusion over a network with fixed structure, in continuous time. A recent work by Zhang et al. [Zhang et al. 2016] also studies dynamic networks in a Markov chain setting; they focus on community detection while we focus on change point detection.
3 Problem Formulation
This paper studies how to detect the times at which the fundamental evolution mechanism of a dynamic network changes. We assume that there is some unknown underlying model that governs the generative process; our change point detection algorithm is agnostic to this model. We assume that each observed network snapshot is a sample that depends on some generative model and on the previous snapshot. Networks fluctuate across snapshots even when the generative model stays unchanged; only when the generative model changes do we consider it a fundamental change. We represent the evolutionary process as a Markov network (Figure 1).
Model | Edge probability | Explanation
Erdős–Rényi (ER) | $p$ | $p$: edge probability
Chung–Lu (CL) | $p_{ij} \propto w_i w_j$ | $w_i$: weight of node $i$ ([Pfeiffer III et al. 2012]); $\rho$: edge density
Stochastic Block Model (SBM) | $p_{z_i, z_j}$ | $z_i$: community assignment of node $i$; $p_{a,b}$: probability of edges between communities $a$ and $b$
SBM-CL | (notation as above) |
BTER | (see explanation) | Intra-community edge probability follows ER, inter-community CL [Seshadhri et al. 2012]; $\mathbb{1}[\cdot]$ is the indicator function
In Figure 1, $g^{(t)}$ is the network generation model at time $t$. It is a triad $(c_t, \mathrm{Type}_t, \Theta_t)$, where $c_t$ is the continuity parameter at time $t$, $\mathrm{Type}_t$ specifies the model, and $\Theta_t$ represents the model parameters (Table 1 lists some generative models we experiment on). $G^{(t)}$ is the network (graph) observable at time $t$. We assume the number of vertices in $G^{(t)}$ is fixed to be $N$ for all snapshots (the union of all nodes is used when there is node addition/deletion, as in [Peel and Clauset 2015]), so each $G^{(t)}$ has $2^{\binom{N}{2}}$ possible configurations; $T$ is the total number of snapshots we observe. As per Figure 1, the configuration of the network at time $t$, $G^{(t)}$, depends on the generation model at time $t$, $g^{(t)}$ (unobserved), and on the network configuration at time $t-1$, $G^{(t-1)}$ (observed). Hence the networks in the observed sequence are samples from a conditional distribution (the samples are not independent). The continuity rate parameter $c_t$ controls the fraction of edges and non-edges that are retained from the previous snapshot $G^{(t-1)}$. The network at time $t$ is assumed to be generated in the following way: for each dyad, keep the connection status from time $t-1$ with probability $1-c_t$, and with probability $c_t$ resample the connection according to the generation model at time $t$. Consequently, the smaller $c_t$ is, the more overlap there is between two consecutive snapshots. Note that two consecutive network configurations may differ substantially if $c_t$ is large, even though the underlying generation model may be the same. Moreover, changes of the generation model are assumed to be rare across the time span ($g^{(t)} \neq g^{(t-1)}$ is a rare event).
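The snapshot transition just described can be sketched in code. The following is a minimal illustration, assuming an unweighted, undirected snapshot represented as a set of dyads; `evolve_snapshot` and `edge_prob` are hypothetical names, and the continuity rate `c` is the per-dyad resampling probability as defined above.

```python
import random

def evolve_snapshot(prev_edges, all_dyads, c, edge_prob):
    """One step of the assumed generative process: each dyad keeps its
    previous connection status with probability 1 - c, and with
    probability c its status is resampled from the generative model."""
    nxt = set()
    for d in all_dyads:
        if random.random() < c:            # resample this dyad
            if random.random() < edge_prob(d):
                nxt.add(d)
        elif d in prev_edges:              # keep previous status
            nxt.add(d)
    return nxt
```

With `c = 0` the snapshot is copied unchanged; with `c = 1` the snapshot is an independent draw from the model (e.g., `edge_prob = lambda d: 0.1` for ER).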
Notation | Explanation
$N$ | network size, in terms of the number of nodes
$T$ | number of snapshots
$t$ | time stamp, $1 \le t \le T$
$\mathcal{C}$ | set of all change points
$g^{(t)}$ | (unknown) generative model at time stamp $t$
$G^{(t)}$ | snapshot at time stamp $t$
$c_t$ | continuity rate (at time $t$)
$w$ | window size, or number of snapshots in a window
$W_t$ | a window of size $w$ ending at time $t$
$s$ | step size of the sliding windows
$m$ | number of windows
$e_i^{(t)}$ | connection status of dyad $i$ (at time $t$)
$\bar{E}$ | average number of edges in each snapshot
$p_i$ | connection probability of dyad $i$ (at time $t$)
$k$ | number of dyads to be sampled and tracked
$n_{01}^{i}$ | number of flips from 0 to 1 of dyad $i$ during the period of interest
$n_{10}^{i}$, $n_{00}^{i}$, $n_{11}^{i}$ | defined analogously
$n_0^{i}$ | number of disconnected occurrences of dyad $i$ in the period of interest
Problem Definition. Our goal is to efficiently find the set $\mathcal{C} = \{t : g^{(t)} \neq g^{(t-1)}\}$, that is, to efficiently find all time points at which the network generation model differs from that of the previous time point.
4 Methodology
Given the graphical formulation of the problem, exact inference is impossible since we do not know the underlying generative model, and our observations are stochastic. However, even without prior knowledge of the generative model, we can still design an approximate inference technique based on MCMC sampling theory.
The framework is straightforward, as mentioned in Section 2: we first extract a "feature vector" from each snapshot, then quantify the dissimilarity between consecutive snapshots, and flag a change point when the dissimilarity score is above a threshold. We use the joint edge probability as the "feature vector" (Section 4.1), exploit the Kolmogorov–Smirnov statistic, Kullback–Leibler divergence, and Euclidean distance as dissimilarity measures (Section 4.2), and use a permutation-test-like approach to determine the threshold (Section 4.3).

4.1 Edge Probability Estimation
In this subsection, we discuss how to (approximately) estimate the joint distribution of the dyads¹. We track the presence or absence of a small fixed number of dyads throughout the entire observed sequence of network snapshots. We break the observation sequence into fixed-length windows, and for each window we infer the joint distribution of the dyads in our sample. Given the sequence of generative models, we model each dyad as a conditionally independent two-state Markov chain (Figure 2; we write $c$ instead of $c_t$ in this section for brevity); this conditional independence assumption is satisfied for the models in Table 1. Note that even for generative processes that may induce greater dependence among dyads (such as the configuration model), in many cases such dependence will be local, and if the number of sampled dyads is small, these dyads will be spread out enough to be considered independent. Moreover, the conditional independence assumption significantly improves computational efficiency [Hunter et al. 2012]. The marginal probabilities of these dyads can then be estimated using the observed samples within each time window.

We formalize the estimation procedure below. Given a network sequence $G^{(1)}, \dots, G^{(T)}$, we group the networks into sliding windows. We define $W_t$ to be the subsequence of $w$ consecutive observed networks ending at network $G^{(t)}$. We use equal-sized sliding windows with a step size $s$, obtaining a sequence of $m$ windows; the non-overlapping setting uses $s = w$. In each window, we estimate the joint edge distribution (for the selected dyads) $P(e_1, \dots, e_k)$, where $e_i$ indicates an undirected edge at the $i$th dyad and $k$ is the number of dyads tracked. For each of the models in Table 1, the joint distribution factorizes as $P(e_1, \dots, e_k) = \prod_{i=1}^{k} P(e_i)$ (conditional independence; see the method description above).

¹ We refer to node pairs, which may or may not be linked, as dyads.
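As a sketch of the per-window estimation, the empirical joint distribution of the tracked dyads can be computed by treating each snapshot in the window as one observed binary configuration; the function and representation (a window as a list of per-snapshot dyad sets) are illustrative, not the authors' code.

```python
from collections import Counter

def empirical_joint(window, dyads):
    """Empirical joint distribution of the tracked dyads in one window:
    each snapshot (a set of dyads) contributes one binary configuration."""
    counts = Counter(tuple(int(d in g) for d in dyads) for g in window)
    w = len(window)
    return {config: n / w for config, n in counts.items()}
```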
We can view a dyad across time as a two-state Markov chain whose length is the window size $w$; we call a dyad across time a "chain" in the following text for brevity. Let $p_i$ denote the edge probability of the $i$th dyad, and suppose we are interested in $k$ chains.
1) Maximum Likelihood Estimator (MLE)
The joint probability of the $k$ chains is (Figure 2)

$$P = B \prod_{i=1}^{k} (c\,p_i)^{n_{01}^{i}} \,(1 - c\,p_i)^{n_{00}^{i}} \,\big(c\,(1-p_i)\big)^{n_{10}^{i}} \,\big(1 - c\,(1-p_i)\big)^{n_{11}^{i}} \qquad (1)$$
where $n_{01}^{i}$ is the number of transitions from 0 to 1 for chain $i$ (non-edge to edge for the dyad within the window), $B$ stands for the combinatorial coefficients independent of $c$ and the $p_i$, and $n_{00}^{i} + n_{01}^{i} + n_{10}^{i} + n_{11}^{i} = w - 1$ for all $i$. Hence the log-likelihood (omitting the coefficient $B$) is:
$$\ell(c, p_1, \dots, p_k) = \sum_{i=1}^{k} \Big[ n_{01}^{i} \log(c\,p_i) + n_{00}^{i} \log(1 - c\,p_i) + n_{10}^{i} \log\big(c\,(1-p_i)\big) + n_{11}^{i} \log\big(1 - c\,(1-p_i)\big) \Big] \qquad (2)$$
MLE for a single chain. First consider the case of a single chain. Solving the zero-derivative Equations 4 and 5 leads to estimators of $(c, p)$. These estimators indeed yield a negative definite Hessian, and therefore constitute the MLE. Hence we have
$$\hat{p} = \frac{n_{01}/n_0}{n_{01}/n_0 + n_{10}/n_1}, \qquad \hat{c} = \frac{n_{01}}{n_0} + \frac{n_{10}}{n_1}, \qquad (3)$$

where $n_0 = n_{00} + n_{01}$ and $n_1 = n_{10} + n_{11}$.
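For concreteness, the single-chain computation can be sketched as follows: count the four transition types in a 0/1 chain, form the standard transition-probability MLEs $n_{01}/n_0$ and $n_{10}/n_1$, and map them back to $(c, p)$ via $P(0{\to}1) = c\,p$ and $P(1{\to}0) = c\,(1-p)$. The function name and the guards for degenerate chains are our own additions.

```python
def single_chain_mle(states):
    """MLE of (c, p) for one dyad chain: a = P(0->1) = n01/n0 and
    b = P(1->0) = n10/n1; inverting a = c*p and b = c*(1-p) gives
    c = a + b and p = a / (a + b)."""
    n00 = n01 = n10 = n11 = 0
    for prev, cur in zip(states, states[1:]):
        if prev == 0 and cur == 0: n00 += 1
        elif prev == 0 and cur == 1: n01 += 1
        elif prev == 1 and cur == 0: n10 += 1
        else: n11 += 1
    n0, n1 = n00 + n01, n10 + n11
    a = n01 / n0 if n0 else 0.0    # chain never in state 0
    b = n10 / n1 if n1 else 0.0    # chain never in state 1
    c_hat = a + b
    p_hat = a / c_hat if c_hat else 0.0
    return c_hat, p_hat
```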
MLE for multiple chains. The MLE for multiple chains essentially involves solving a high-degree polynomial, which in general has no closed-form solution.
$$\frac{\partial \ell}{\partial p_i} = \frac{n_{01}^{i}}{p_i} - \frac{n_{00}^{i}\, c}{1 - c\,p_i} - \frac{n_{10}^{i}}{1 - p_i} + \frac{n_{11}^{i}\, c}{1 - c\,(1-p_i)} = 0 \qquad (4)$$
$$\frac{\partial \ell}{\partial c} = \sum_{i=1}^{k} \left[ \frac{n_{01}^{i} + n_{10}^{i}}{c} - \frac{n_{00}^{i}\, p_i}{1 - c\,p_i} - \frac{n_{11}^{i}\,(1-p_i)}{1 - c\,(1-p_i)} \right] = 0 \qquad (5)$$
where $c$ is the continuity rate. If $c = 0$ then all snapshots are identical, which is uninteresting, so we take $c > 0$ in Equation 5.
Combining Equations 4 and 5, one can obtain a high-order polynomial in $c$, which in general has no closed-form solution by the Abel–Ruffini theorem. We tried solving the special case of two chains with Wolfram Mathematica [mat]; the solutions (of two quartic functions) turn out to be very complicated and run over 40 pages. A common way to solve such maximization problems is to employ numerical methods such as gradient descent. The drawback of such an approach is that it can be computationally expensive with hundreds of dyads and windows. Therefore, we settle for an approximation of the MLE that empirically matches the numerical values well². Intuitively, the estimator for $c$ should depend on all the chains, but chains that spend more time in both states provide more information about $c$ than chains that spend most of their time in one state (the latter may be due to a small $c$ or to a value of $p_i$ far from $1/2$). Since we can easily compute the single-chain MLE for $c$, we estimate $c$ with a weighted average of the MLEs from the individual chains, weighting more heavily the chains that spend more time in both states. We then estimate each $p_i$ by the MLE for the $i$th chain, since the chains are conditionally independent given $c$. This results in the following estimators.

² At the chosen significance level, a two-sample test shows the approximated values equal the numerical values.
$$\hat{c} = \frac{\sum_{i=1}^{k} (n_0^{i}\, n_1^{i})^{\gamma}\, \hat{c}_i}{\sum_{i=1}^{k} (n_0^{i}\, n_1^{i})^{\gamma}}, \qquad \hat{p}_i = \frac{n_{01}^{i}/n_0^{i}}{n_{01}^{i}/n_0^{i} + n_{10}^{i}/n_1^{i}} \qquad (6)$$
Empirically we find that a large exponent $\gamma$ works best, which means $\hat{c}$ is essentially a simple average of the $\hat{c}_i$ corresponding to the chains with maximal value of $n_0^{i}\, n_1^{i}$. The continuity rate describes the temporal dependency among networks, and can help us determine a proper window size.
Drawbacks of MLE. Though MLEs are consistent in general, there is no guarantee of unbiasedness for these particular estimators with limited samples. Moreover, they involve three random quantities (the counts in (6) have three degrees of freedom for a fixed window size $w$) and hence require more samples to estimate, making them prohibitive in practice.

2) Simplified Estimator
To overcome the drawbacks of the MLE, we propose a simple estimator of the edge probability which is consistent and unbiased, involves only one random quantity, and therefore requires fewer samples. The simple estimator essentially estimates the edge frequency in each window. Since changes happen rarely and the process stays in equilibrium most of the time, we can show the following estimator to be consistent and unbiased in equilibrium:
$$\hat{p}_i = \frac{1}{w} \sum_{t \in W} e_i^{(t)} = \frac{n_1^{i}}{w}, \qquad (7)$$
which is the proportion of snapshots within the window in which the dyad is an edge.
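The simplified estimator amounts to a per-dyad frequency count. A sketch, with the window again represented as a list of per-snapshot dyad sets (names are illustrative):

```python
def simplified_estimates(window, dyads):
    """Eq. (7): estimate each tracked dyad's edge probability as its
    edge frequency within the window."""
    w = len(window)
    return [sum(d in g for g in window) / w for d in dyads]
```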
Proposition 1. In equilibrium, $\hat{p}_i$ is consistent as the chain length (window size) $w$ increases.
Proof.
By the ergodic theorem [Givens and Hoeting 2012], $\hat{p}_i = \frac{1}{w}\sum_{t \in W} e_i^{(t)} \xrightarrow{a.s.} \mathbb{E}[e_i] = p_i$, where almost sure convergence implies convergence in probability (so the estimator is consistent). ∎
Proposition 2. In equilibrium, $\hat{p}_i$ is unbiased.
Proof.
In equilibrium, $\mathbb{E}[e_i^{(t)}] = p_i$ for every $t$ in the window, so by linearity of expectation $\mathbb{E}[\hat{p}_i] = \frac{1}{w}\sum_{t \in W} \mathbb{E}[e_i^{(t)}] = p_i$. ∎
The above propositions imply that the larger the window size, the better the estimate, and that in equilibrium the temporal dependency (continuity rate) has no impact on estimating the "on" probability of a Markov chain, and hence no impact on estimating the edge probability of a snapshot.
Although the MLE is close to the true value when the chain is long enough, we do not use such large window sizes in practice. Experiments (Figure 3) show that the simplified estimator is much better than the MLE for change point detection in practice.
4.2 Distance measure
Now, we need to compare the probability distributions of edges across consecutive windows. The Kolmogorov–Smirnov (KS) statistic and Kullback–Leibler (KL) divergence are two common measures for comparing distributions. Their exact computation requires enumerating the whole state space, which is exponential in the number of variables for joint distributions. Although the KS statistic is designed for univariate distributions, we can map the joint distribution, which has multivariate binary variables, to one dimension by decoding each binary vector as an integer. We bootstrap from the empirical distributions of two consecutive windows and use the two-sample KS test to quantify the difference between the two distributions. We use divide-and-conquer to alleviate the exponential complexity: partition the dyads into groups, compute the KL/KS dissimilarity within each small group, and report the median over all groups as the final dissimilarity. Both of the above measures have good quality in terms of change point detection (Figure 4), but the KS statistic is extremely slow (Table 4), mostly due to the large bootstrap sample drawn from each window. Euclidean distance, though lacking a probabilistic interpretation, has linear complexity and reasonable quality in practice.
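The divide-and-conquer scheme can be sketched as follows. This is an illustrative implementation, not the authors' code: the group count, the additive smoothing `eps` (needed because the two empirical supports may differ), and all names are our assumptions.

```python
import math
from collections import Counter

def grouped_kl(window_a, window_b, dyads, n_groups=5, eps=1e-6):
    """Partition the tracked dyads into groups, compute a smoothed KL
    divergence between the empirical joint distributions of each group
    in the two windows, and return the median group score."""
    size = math.ceil(len(dyads) / n_groups)
    scores = []
    for g in range(n_groups):
        group = dyads[g * size:(g + 1) * size]
        if not group:
            continue
        pa = Counter(tuple(int(d in s) for d in group) for s in window_a)
        pb = Counter(tuple(int(d in s) for d in group) for s in window_b)
        wa, wb = len(window_a), len(window_b)
        configs = set(pa) | set(pb)
        kl = sum((pa[c] / wa + eps) *
                 math.log((pa[c] / wa + eps) / (pb[c] / wb + eps))
                 for c in configs)
        scores.append(kl)
    scores.sort()
    return scores[len(scores) // 2]  # median (upper middle if even)
```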
Window Index | Type of Change
15 | The weight sequence of 1/3 of the nodes is regenerated
30 | The weight sequence of 2/3 of the nodes is regenerated
60 | Half of the communities change their (inter- and intra-community) connection rates; overall density retained
75 | All of the communities change their (inter- and intra-community) connection rates; overall density retained
90 | Half of the communities change their (inter- and intra-community) connection rates; overall density changed
105 | All of the communities change their (inter- and intra-community) connection rates; overall density changed
135 | Community assignments of all the nodes are changed
4.3 Threshold Determination
Suppose we have $m$ windows; then we compare $m - 1$ pairs of distributions and obtain $m - 1$ difference/distance scores. How do we choose a threshold to determine at which window the network changes? We use a permutation-test-based approach [Pitman 1937] to determine the threshold: for a desired significance level $\alpha$, we bootstrap from the distance scores and use the upper $\alpha$ quantile as the threshold.
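The threshold computation can be sketched as follows, assuming the distance scores have already been collected; the resample count, seed handling, and function name are illustrative.

```python
import random

def bootstrap_threshold(scores, alpha=0.05, n_boot=1000, seed=0):
    """Resample the observed distance scores with replacement and return
    the upper-alpha quantile; a window whose score exceeds this threshold
    is flagged as a change point."""
    rng = random.Random(seed)
    sample = sorted(rng.choice(scores) for _ in range(n_boot))
    return sample[min(n_boot - 1, int((1 - alpha) * n_boot))]
```

A score far above the bulk of the distribution then exceeds the threshold and is flagged.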
4.4 Complexity Analysis
The algorithm is linear in the number of windows and constant in the network size for moderately large networks. Only a small fraction of the dyads in the network is sampled and tracked. The sampling of the dyads is performed only once at the beginning, and hence is independent of the number of snapshots. For each snapshot, selecting the tracked dyads has cost linear in the number of edges. Each window is scanned only once, so the time cost is linear in the number of windows. Moreover, since the number of windows is linear in the total number of snapshots even in the worst case (overlapping windows with a step size of one), the algorithm is linear in the number of snapshots. Therefore, the time complexity is $O(\bar{E}\,T)$, where $\bar{E}$ is the average number of edges in each snapshot and $T$ is the number of snapshots.
The memory cost is low and can be viewed as constant: for each snapshot, only the information about the tracked dyads is stored; dyad information within the same window is aggregated; and dyad information from the old window is overwritten once it has been compared against the new window. The space complexity is $O(k)$, where $k$ is a prescribed sample size. In theory the sample size should be proportional to the network size for good estimation, but our experiments show that a fixed sample size (tracking 250 dyads) works well on a moderately large network.
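This constant-memory bookkeeping can be sketched with a small streaming class, assuming non-overlapping windows and the simplified frequency estimator; the class and method names are hypothetical.

```python
class DyadWindowCounter:
    """Streams snapshots with O(k) memory: only counts for the k tracked
    dyads are kept, aggregated per window; the previous window's estimates
    are overwritten after each comparison."""
    def __init__(self, dyads, w):
        self.dyads, self.w = list(dyads), w
        self.counts = [0] * len(self.dyads)
        self.seen = 0
        self.prev = None  # estimates from the last completed window

    def add_snapshot(self, edges):
        """Returns (previous, current) window estimates when a window
        completes, otherwise None."""
        for i, d in enumerate(self.dyads):
            self.counts[i] += d in edges
        self.seen += 1
        if self.seen < self.w:
            return None
        cur = [n / self.w for n in self.counts]
        out = (self.prev, cur)
        self.prev = cur
        self.counts = [0] * len(self.dyads)
        self.seen = 0
        return out
```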
5 Experiments And Results
We thoroughly evaluated our edge-probability-estimation-based change point detection algorithm (called EdgeMonitoring for brevity) on synthetic and real-world datasets. For the synthetic datasets, the generative process is known, so we can compute the ground truth in the form of the likelihood, which is a natural baseline choice. We also use the state-of-the-art DeltaCon [Koutra et al. 2016] and LetoChange [Peel and Clauset 2015] as two baselines.
5.1 Synthetic Data
Data generation³. We generate a sequence of networks from a fixed generative model. The snapshots are not independent: each snapshot depends on the preceding one through the continuity parameter $c$. For each snapshot, each dyad is selected independently with probability $c$, and if selected, its edge status is resampled from the generative model (Table 1). We introduce change points by changing the generative model in the middle of the sequence of snapshots. Note that this change may simply be a change of parameter values for a given model (e.g., ER to ER) or a change in the model type (e.g., SBM to ER). Since our algorithm makes no assumptions about model specifics, it is able to detect both kinds of changes. We inject only parameter changes in the synthetic experiments, since model-type changes are easily detectable. Sample changes are displayed in Table 3. The likelihood of the snapshot sequence is also provided.

³ Generated using SNAP [Leskovec and Sosič 2014].
We ran experiments with network sizes ranging from 1k to 50k, window sizes from 10 to 100, and continuity rates of 0.51 and 0.9. We generated a total of 5000 snapshots and sampled 250 dyads uniformly at random to track. Overlapping windows ($s < w$) and non-overlapping windows ($s = w$) give similar results, but the latter is faster simply because there are fewer windows; hence we display non-overlapping window results only. For KL and KS, dyads are grouped into 25 equal-sized groups. We use the upper $\alpha$ quantile as the threshold.
Results. Figure 4 shows the qualitative comparison and Table 4 reports the efficiency. Figure 4(a) shows that the likelihood of the network drops dramatically after the generative model changes, and then recovers to a new equilibrium. Our EdgeMonitoring approach (EM-Eu, EM-KL) successfully identifies all change points with a 5X speedup over DeltaCon. The changes are explained in Table 3. EM-KL has the best performance: little fluctuation and perfect precision and recall. DeltaCon, though it has the smallest fluctuation, misses two change points. Both EM-KS and EM-Eu fluctuate considerably. The quality of EM-KS relies heavily on the joint probability estimation, and we indeed see smaller fluctuation and higher recall for larger window sizes. EM-Eu in general fluctuates strongly. Overall, EM-KL performs best in terms of both quality and time efficiency; we believe grouping together with median selection contributes to its superiority.
Model | Network Size | Window Size | EM Time¹ | EM-KS Time | DC Time (speedup) | LC Time
CL | 1k | 20 | 18s | 11h | 91s (5X) | DNF
SBM-CL | 1k | 10 | 27s | 22h | 125s (5X) | DNF
SBM-CL | 1k | 50 | 9s | 4.5h | 43s (5X) | DNF
SBM-CL | 5k | 20 | 54s | 11h | 309s (6X) | DNF
SBM-CL | 10k | 20 | 232s | 10h | 32m (8X) | DNF
SBM-CL | 50k | 20 | 26m | 10h | 4h (9X) | DNF
BTER² | 1k | 20 | 3s | 87m | 12s (4X) | 6h
Figure 4 | 1k | 20 | 21s | 6.5h | 103s (5X) | DNF
Figure 5 | 100 | biennial | 4s | 43m | 16s (4X) | 13h
[Voeten 2012] | 200 | annual | 10s | 3h | 93s (9X) | DNF
Enron | 150 | weekly | 1s | 7.5h | 1s (1X) | 60h

¹ EM stands for EdgeMonitoring (running time includes both KL and Euclidean), EM-KS for EdgeMonitoring with the KS test, DC for DeltaCon, and LC for LetoChange. EM and DC are implemented in MATLAB, LC in Python. All runs were on a commercial desktop with a 48-hour time limit ("DNF" = did not finish). Each running time is averaged over 5 runs.

² The BTER dataset has 800 snapshots.
5.2 Real World Data
Senate cosponsorship network ([Fowler 2006]). We construct a cosponsorship network from bills (co)sponsored in the US Senate during the 93rd-108th Congresses. An edge is formed between two congresspersons if they cosponsored the same bill. Each bill corresponds to a snapshot and forms a clique among its cosponsors. A window is set to include all bills in a single Congress (biennial).
We randomly selected 250 dyads and tracked their fluctuations across the Congresses. We start from the 97th Congress, since full amendment data is available only from the 97th session onwards. Figure 5 compares EdgeMonitoring+KL, DeltaCon, and LetoChange. All methods were able to detect the most significant change point at the 104th Congress. Fowler [Fowler 2006] points out that there was a "Republican Revolution" in the 104th Congress which "caused a dramatic change in the partisan and seniority compositions." The author also points out the significance of the 100th Congress (highest clustering coefficient, significant collaboration) and the 104th Congress (lowest clustering coefficient, a low point in collaboration) as inflection points in the Senate political process. Both our EdgeMonitoring approach and LetoChange classify these two Congresses as change points, but the latter takes much more time; DeltaCon picks up one (the 104th) but not the other (the 100th). This provides evidence that our algorithm captures changes in network evolution effectively while being significantly faster than the state of the art.
6 Conclusion
In this paper, we develop a change point detection algorithm for dynamic networks that is both efficient and accurate. Our approach relies on sampling and comparing the estimated joint edge (dyad) distribution. We first develop a maximum likelihood estimator and analyze its drawbacks for small window sizes (the typical case). We then develop a consistent and unbiased estimator that overcomes the drawbacks of the MLE, resulting in a significant quality improvement over the MLE. We conduct a thorough evaluation of our change point detection algorithm against two state-of-the-art methods, DeltaCon and LetoChange, on synthetic as well as real-world datasets. Our results indicate that our method is up to 9X faster than DeltaCon while achieving better quality. In the future we plan to extend our work to track higher-order structures of the network, such as 3-profiles [Elenberg et al. 2015] or 4-profiles, and see how they evolve over time.

Acknowledgments
This work is supported in part by NSF grants DMS-1418265, IIS-1550302, and IIS-1629548. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
References
 [Akoglu et al.2014] Leman Akoglu, Hanghang Tong, and Danai Koutra. Graph-based anomaly detection and description: A survey. Data Mining and Knowledge Discovery (DAMI), 28(4), 2014.
 [Berlingerio et al.2012] Michele Berlingerio, Danai Koutra, Tina Eliassi-Rad, and Christos Faloutsos. NetSimile: a scalable approach to size-independent network similarity. arXiv preprint arXiv:1209.2684, 2012.
 [Bridges et al.2015] Robert A Bridges, John P Collins, Erik M Ferragut, Jason A Laska, and Blair D Sullivan. Multi-level anomaly detection on time-varying graph data. In Proceedings of the 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2015, pages 579–583. ACM, 2015.
 [Caceres and BergerWolf2013] Rajmonda Sulo Caceres and Tanya Berger-Wolf. Temporal scale of dynamic networks. In Temporal Networks, pages 65–94. Springer, 2013.
 [Elenberg et al.2015] Ethan R Elenberg, Karthikeyan Shanmugam, Michael Borokhovich, and Alexandros G Dimakis. Beyond triangles: A distributed framework for estimating 3profiles of large graphs. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 229–238. ACM, 2015.
 [Erdős and Rényi1960] Paul Erdős and A Rényi. On the evolution of random graphs. Publ. Math. Inst. Hungar. Acad. Sci, 5:17–61, 1960.
 [Fowler2006] James H Fowler. Legislative cosponsorship networks in the US House and Senate. Social Networks, 28(4):454–465, 2006.
 [Givens and Hoeting2012] Geof H Givens and Jennifer A Hoeting. Computational statistics, volume 710. John Wiley & Sons, 2012.
 [Hunter et al.2012] David R Hunter, Pavel N Krivitsky, and Michael Schweinberger. Computational statistical methods for social network models. Journal of Computational and Graphical Statistics, 21(4):856–882, 2012.
 [Karrer and Newman2011] Brian Karrer and Mark EJ Newman. Stochastic blockmodels and community structure in networks. Physical Review E, 83(1):016107, 2011.
 [Klimt and Yang2004] Bryan Klimt and Yiming Yang. The enron corpus: A new dataset for email classification research. In Machine learning: ECML 2004, pages 217–226. Springer, 2004.
 [Koutra et al.2016] Danai Koutra, Neil Shah, Joshua T Vogelstein, Brian Gallagher, and Christos Faloutsos. DeltaCon: Principled Massive-Graph Similarity Function with Attribution. ACM Transactions on Knowledge Discovery from Data (TKDD), 10(3):28, 2016.
 [La Fond et al.2014] Timothy La Fond, Jennifer Neville, and Brian Gallagher. Anomaly detection in networks with changing trends, 2014.
 [Leskovec and Rok Sosič2014] Jure Leskovec and Rok Sosič. SNAP: A general purpose network analysis and graph mining library in C++. http://snap.stanford.edu/snap, Jun 2014.
 [Li et al.2016] Shuang Li, Yao Xie, Mehrdad Farajtabar, and Le Song. Detecting weak changes in dynamic events over networks. arXiv preprint arXiv:1603.08981, 2016.
 [Loglisci et al.2015] Corrado Loglisci, Michelangelo Ceci, and Donato Malerba. Relational mining for discovering changes in evolving networks. Neurocomputing, 150:265–288, 2015.
 [mat] Wolfram Mathematica. https://www.wolfram.com/mathematica/. Accessed: 2017-06-03.
 [Moreno and Neville2013] Sebastian Moreno and Jennifer Neville. Network hypothesis testing using mixed Kronecker product graph models. In Data Mining (ICDM), 2013 IEEE 13th International Conference on, pages 1163–1168. IEEE, 2013.
 [Peel and Clauset2015] Leto Peel and Aaron Clauset. Detecting change points in the large-scale structure of evolving networks. In Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.
 [Peixoto and Rosvall2015] Tiago P Peixoto and Martin Rosvall. Modeling sequences and temporal networks with dynamic community structures. arXiv preprint arXiv:1509.04740, 2015.
 [Pfeiffer III et al.2012] Joseph J Pfeiffer III, Timothy La Fond, Sebastian Moreno, and Jennifer Neville. Fast generation of large scale social networks with clustering. arXiv preprint arXiv:1202.4805, 2012.
 [Pitman1937] Edwin JG Pitman. Significance tests which may be applied to samples from any populations. Supplement to the Journal of the Royal Statistical Society, 4(1):119–130, 1937.
 [Ranshous et al.2015] Stephen Ranshous, Shitian Shen, Danai Koutra, Steve Harenberg, Christos Faloutsos, and Nagiza F Samatova. Anomaly detection in dynamic networks: a survey. Wiley Interdisciplinary Reviews: Computational Statistics, 7(3):223–247, 2015.
 [Seshadhri et al.2012] C Seshadhri, Tamara G Kolda, and Ali Pinar. Community structure and scale-free collections of Erdős–Rényi graphs. Physical Review E, 85(5):056109, 2012.
 [Shih and Parthasarathy2012] YuKeng Shih and Srinivasan Parthasarathy. Identifying functional modules in interaction networks through overlapping markov clustering. Bioinformatics, 28(18):i473–i479, 2012.
 [Voeten2012] Erik Voeten. Data and analyses of voting in the UN general assembly. Available at SSRN 2111149, 2012.
 [Zhang et al.2016] Xiao Zhang, Cristopher Moore, and MEJ Newman. Random graph models for dynamic networks. arXiv preprint arXiv:1607.07570, 2016.