The field of adaptive signal processing has found success in a vast number of applications, from MIMO communication to real-time machine learning [1, 2, 3]. Its adoption for less conventional data structures has also been growing, with many recent results on quaternions [4, 5]
and tensors. The least mean square (LMS) algorithm was the first fundamental adaptive filtering strategy in all these domains, and the rapidly growing interest in adaptive filtering of multivariate/multiway data types comes as a consequence of the increasing availability of multisensor/multinode data acquisition devices. Typically, the measurements are obtained from a large-scale sensor array, with possibly sparse and arbitrarily distributed sensors which provide streaming data. These challenges require us to move beyond the traditional methods and problem formulations of adaptive signal processing and to introduce domain-specific solutions.
This paper considers the general scenario of irregular sensor structures represented as a graph, whereby the underlying statistical model follows the graph-topological structure. The existing work in signal processing on graphs includes the tracking of time-varying graph signals [8, 9] and the use of space transforms and dimensionality reduction to reduce the problem complexity [10, 11]. These results assume an autoregressive model for graph data, and have recently been generalized to autoregressive moving average models.
It is important to note that a common assumption made in much of the existing work is that the graph topology is known a priori. However, in many network-related problems, such as social media, financial assets, or neuron connectivity, the topology (the relationship between nodes, assets or neurons) needs to be learned, not to mention that the topology is also often time-varying. To discover the topology of the GSO which generated the observed graph data, existing work assumes a vertex-time autoregressive causal process; however, like all of the above-mentioned articles, this offers a batch method where all the data are considered at once. To the best of the authors' knowledge, a truly adaptive approach to this problem is still lacking.
To this end, we propose an online adaptive filtering algorithm for streaming graph data. Similarly to prior work, we design the algorithm in a system identification setting, whereby the task boils down to recovering the structure of the underlying GSO. The proposed approach employs modified stochastic gradient descent methods [4, 6], in addition to graph-specific structures such as sparsity or commutativity, which are enforced naturally by the graph topology. As existing work has already shown the potential and limitations of the graph topology identification problem, this paper aims to further explore the possibility of applying the techniques of adaptive signal processing to random graph processes. The paper is organized as follows. Section II reviews the necessary background. The main problem is then outlined in Section III, with the proposed algorithm introduced in Section IV. In Section V, empirical simulations are performed to validate the capability of the proposed adaptive graph filter. Finally, conclusions are provided in Section VI.
II. Basics of Graph Signal Processing
This section reviews the structure of graph signals, systems on graphs based on random graph processes, and some relevant notions of weakly stationary graph processes.
II-A Graph Signals
Consider a weighted random graph, $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ is the vertex set, $\mathcal{E}$ is the edge set, and the matrix $\mathbf{S}$ is the associated shift operator whose entries satisfy $S_{ij} \neq 0$ only if $(j, i) \in \mathcal{E}$. The matrix $\mathbf{S}$ captures the local, usually sparse, patterns of $\mathcal{G}$, examples of which include the (weighted) adjacency matrix, the graph Laplacian, and their respective generalized forms [14, 15].
A graph signal is then a function which maps the vertex set, $\mathcal{V}$, onto the set of real or complex numbers, and is conveniently represented by a vector $\mathbf{x} = [x_1, \dots, x_N]^T$, where $x_n$ denotes the signal value at vertex $n$. At a particular time instant, the interaction of all elements of a graph signal is modelled according to the graph shift operator (GSO), $\mathbf{S}$, which represents a linear transformation describing how graph signals localize across the network.
Similar to the standard shift in time, we can introduce a graph filter which shifts in vertices, defined as a polynomial of graph shift operators in the form
$$H(\mathbf{S}) = \sum_{k=0}^{K} h_k \mathbf{S}^k,$$
where $\mathbf{h} = [h_0, h_1, \dots, h_K]^T$ is a vector of coefficients; the definition of $H(\mathbf{S})$ is given in this way to ease the problem formulation later in the paper. It is noteworthy that the filter is commutative with respect to the shift operator $\mathbf{S}$, that is, $\mathbf{S} H(\mathbf{S}) = H(\mathbf{S}) \mathbf{S}$.
This property, called shift invariance, will be necessary in the estimation of $\mathbf{S}$, as it implies that the structures of graph processes are not entirely arbitrary.
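To make the polynomial filter and its shift invariance concrete, the following sketch (in Python with NumPy; the cycle-graph GSO and the filter coefficients are illustrative assumptions, not taken from this paper) builds $H(\mathbf{S}) = \sum_k h_k \mathbf{S}^k$ and verifies that it commutes with $\mathbf{S}$.

```python
import numpy as np

def graph_filter(S, h):
    """Polynomial graph filter H(S) = sum_k h[k] * S^k."""
    H = np.zeros_like(S, dtype=float)
    S_power = np.eye(S.shape[0])        # S^0 = I
    for h_k in h:
        H += h_k * S_power
        S_power = S_power @ S           # next power of the shift operator
    return H

# Illustrative GSO: adjacency matrix of a 4-node directed cycle
S = np.roll(np.eye(4), 1, axis=1)
H = graph_filter(S, [1.0, 0.5, 0.25])

# Shift invariance: any polynomial in S commutes with S
assert np.allclose(S @ H, H @ S)
```

Since every term of the polynomial is a power of $\mathbf{S}$, the commutativity check holds for any choice of coefficients.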
If $\mathbf{S}$ is an adjacency matrix, then its entries satisfy $S_{ij} \geq 0$ for $i \neq j$ and $S_{ii} = 0$. When used as a GSO, a graph Laplacian (of $\mathcal{G}$) will have zero row sums, with entries $S_{ij} \leq 0$ for $i \neq j$ and $S_{ii} \geq 0$. Other alternative GSOs have also been recently proposed, the most suitable choice of which will depend on the application at hand. For example, electric circuits are mainly modelled using adjacency matrices, while diffusion-on-graph problems naturally employ the Laplacian. Here, to maintain the generality of this study, the only two assumptions made on $\mathbf{S}$ are shift invariance and sparsity, common features shared by most GSOs in practice [14, 15].
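These sign and row-sum conventions can be verified numerically; the small weighted graph below is a made-up example, not one from the paper.

```python
import numpy as np

# Made-up 4-node weighted undirected graph
A = np.array([[0., 2., 0., 1.],
              [2., 0., 3., 0.],
              [0., 3., 0., 4.],
              [1., 0., 4., 0.]])

# Graph Laplacian L = D - A: nonnegative diagonal, nonpositive off-diagonal
L = np.diag(A.sum(axis=1)) - A

assert np.all(np.diag(A) == 0)               # adjacency: zero diagonal
assert np.allclose(L.sum(axis=1), 0)         # Laplacian: zero row sums
assert np.all(L - np.diag(np.diag(L)) <= 0)  # off-diagonals nonpositive
```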
II-B Vertex-Time ARMA Processes
It is important to note that the shift across vertices does not account for the shift in time, which reflects the dynamics of real-world signals. Consider a general time-varying $N$-dimensional signal, $\mathbf{x}[t]$, generated from another time-varying $N$-dimensional signal, $\mathbf{y}[t]$, through a multivariate autoregressive moving average (ARMA) graph process, to give
$$\mathbf{x}[t] = \sum_{p=1}^{P} \mathbf{A}_p[t]\,\mathbf{x}[t-p] + \sum_{q=0}^{Q} \mathbf{B}_q[t]\,\mathbf{y}[t-q], \qquad (3)$$
where $\mathbf{A}_p[t]$ and $\mathbf{B}_q[t]$ are coefficient matrices which depend on the time instant $t$, so that these matrices are not fixed. For a graph signal, the coefficients (matrix elements) explain how each dimension interacts with all the others, and will naturally assume the form of graph shift operators. An intuitive approach would be for the coefficients to assume the form of a graph filter, although there are other interesting basis functions to consider as an alternative, e.g. radial basis functions. Much of the existing literature [8, 9, 10, 11, 12, 13] employs the polynomial graph filter, also adopted here, whereby $\mathbf{A}_p[t]$ and $\mathbf{B}_q[t]$ in (3) can be expressed as
$$\mathbf{A}_p[t] = \sum_{k=0}^{K_p} a_{pk}[t]\,\mathbf{S}^k, \qquad (4)$$
$$\mathbf{B}_q[t] = \sum_{k=0}^{L_q} b_{qk}[t]\,\mathbf{S}^k, \qquad (5)$$
where $K_p$ and $L_q$ are integers denoting the maximal shifts of the specific graph filters. With (4) and (5), the terms $\mathbf{A}_p[t]\,\mathbf{x}[t-p]$ and $\mathbf{B}_q[t]\,\mathbf{y}[t-q]$ become graph signals, the elements of which relate to their respective vertices. The values of $K_p$ and $L_q$ are arbitrary and have to be determined for every problem at hand. In this work, we narrow down the scope of the problem by restricting the random graph process to be purely autoregressive and causal, thus reducing (3) to
$$\mathbf{x}[t] = \sum_{p=1}^{P} \sum_{k=0}^{K_p} a_{pk}[t]\,\mathbf{S}^k\,\mathbf{x}[t-p] + \mathbf{y}[t]. \qquad (6)$$
The causality assumption in (6) implies that $K_p \leq p$; this interpretation is interesting in that the maximum vertex shift at a particular time lag cannot exceed the time lag itself. This signifies that a shift in vertices occurs in tandem with a shift in time, i.e. no more than one shift operator is allowed per time instant. This assumption is rather reasonable, as we are analyzing discrete-time models where the sampling policy can be adjusted accordingly.
II-C Weak Stationarity
We shall now briefly explain how the shift invariance of the graph filter can be interpreted as a form of 'stationarity'. Analogous to the autocorrelation of a time series, the autocorrelation of a graph signal should depend only on the 'distance' of vertex shifts, regardless of the position in the vertex domain where the signal initially resides, i.e. regardless of how many times the signal has already been shifted. Therefore, as long as the total number of shifts is the same, so should be the correlation. This property is called 'weak stationarity'. While the concept of stationarity on graphs is still an open research topic, we employ this notion as it suits our problem setting, since the AR model in (6) is inherently weakly stationary, being made up of a sum of shift-invariant graph filters.
III. Regularized Least Squares Estimation
The estimation task at hand pertains to the class of multivariate linear regression problems, for which the optimal linear estimator is the MSE estimator. Here, we adopt the least squares method, a deterministic counterpart of the MSE estimator [2, 4, 6]. The least squares problem for (6) and (7) is then given by (8), where $P$ denotes the order of this AR random graph process. Observe that (8) represents a non-convex polynomial problem with many minima, for which many solutions have been proposed, none of which guarantees a global optimum, even under some quite restrictive assumptions. A more plausible aim is therefore to identify whether an edge between any pair of vertices exists, with the least chance of misses and false alarms, while keeping the order as small as possible. The above setting implies that the solutions should be sparse, so that rather than solving the polynomial problem directly, we can cast (8) into alternating steps of regularized least squares sub-problems, outlined in the following sections.
III-A Solving for the Filter Matrices
The minimization problem in (8) is now solved with respect to the combined filter matrices, instead of the coefficients and the GSO separately; this makes the problem quadratic and hence standard stochastic gradient descent is applicable. Denoting the corresponding estimates with a hat, and with the assumption of sparsity, we now arrive at the optimization problem,
where $\mathrm{vec}(\cdot)$ is the vectorization operator and $\|\cdot\|_1$ is the $\ell_1$-norm, while each regularization constant adjusts the degree of sparsity of the corresponding term. From (7), it is obvious that the solution grows less sparse with an increase in the shift order $k$, and thus these constants should be set in a decreasing fashion.
The final term makes the above problem of a quartic programming type, rendering the convergence analysis more difficult. This becomes evident in the simulations, where the addition of this regularizer did not improve the algorithm performance significantly, and instead even slightly deteriorated it in some cases.
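The $\ell_1$ term in problems of the form (9) is typically handled through its proximal operator, the soft-thresholding function. The sketch below applies it within a plain ISTA loop on a fabricated sparse regression problem (the data, sizes and regularization weight are our assumptions, not the paper's setting).

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1: shrink entries toward zero."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 20))
w_true = np.zeros(20)
w_true[[2, 7, 11]] = [1.0, -0.5, 2.0]          # sparse ground truth
y = A @ w_true + 0.01 * rng.normal(size=50)

w = np.zeros(20)
mu = 1.0 / np.linalg.norm(A, 2) ** 2           # step below 1/L
lam = 0.05                                     # illustrative sparsity weight
for _ in range(500):
    grad = A.T @ (A @ w - y)                   # gradient of the LS term
    w = soft_threshold(w - mu * grad, mu * lam)
```

The shrinkage step is also what causes the bias discussed above: non-zero entries are systematically underestimated, which motivates the debiasing stage introduced later.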
III-B Estimating the GSO
From (1) and (7), observe that the first-lag filter matrix is a linear function of $\mathbf{S}$, and thus its estimate could represent a good estimate of $\mathbf{S}$ itself. To find a valid $\mathbf{S}$ after obtaining the filter estimates from (9), another regularized least squares sub-problem is needed, given by
The rightmost term ensures that the estimate of $\mathbf{S}$ is as commutative as possible with all the estimated filter matrices, so as to preserve the shift invariance property. Note that when (11) is employed to calculate the filter estimates, the shift invariance property has already been enforced, so that this optimization sub-problem may be bypassed by setting the GSO estimate directly. The implementation strategy is further elucidated in the simulation section.
Note that the regression vector contains all possible combinations of past vertex-time instances of the graph signal. While this appears rather large, in practice the actual order is quite low, with a very small order a likely case. In addition, this step is optional, as our main goal is to recover $\mathbf{S}$.
IV. Adaptive Graph Signal Processing
We now proceed to build upon the alternating optimization problems detailed in the previous section, to introduce a class of adaptive algorithms based on the optimization criterion in (8) with a sparse solution, called the sparsity-aware adaptive algorithm. Although many methods, such as the $\ell_1$-regularized least mean square or the oracle algorithm, lead to convergence with competitively small MSE, these solutions are rarely sparse, if at all, as they do not explicitly zero out the elements of the GSO matrix like their offline counterparts, such as basis pursuit. On the other hand, methods like the ADMM or the ALM have proved valuable in solving offline $\ell_1$-regularized problems; in our endeavor, as the topic is still in its infancy, we adopt standard stochastic gradient descent but rewrite our main cost function to naturally enforce the solution to 'project' onto acceptable values (the prospect of online ADMM and ALM is promising if the underlying algorithm, as in this paper, is designed to work well). To this end, we re-formulate (9) and (11) through (13) by splitting the desired variables into their positive and negative parts, that is
where the two parts contain, respectively, only the positive and the negative entries of the original variable. Note that if $\mathbf{S}$ is an adjacency matrix, then the negative part vanishes, which makes the problem easier. For the Laplacian, this is much more difficult: while it can clearly be split into the positive on-diagonal and negative off-diagonal entries, the real bottleneck is the zero row sum, an equality constraint which is awkward to satisfy iteratively as it could involve Lagrangian methods. Since the structure of the GSO varies with applications, we here study the general unconstrained case, for generality and analytic insight.
IV-A Form of the Algorithm
Based on (15), the operator can be expressed through a product-weighted sum, i.e.
where $\mathrm{tr}(\cdot)$ is the trace operator, and the two auxiliary matrices have all their elements equal to one. These formulae enable us to separate the derivatives with respect to the positive and negative parts, whereby gradient projection can be used to force invalid values to zero, leading to sparsity as a by-product. We now proceed to minimize (11) by calculating the gradient with respect to the positive part, given by
Note that the time index denotes the time instant within the algorithm. Now, let the auxiliary variables be respectively defined as
and with the following variables,
where is given by
We can now express the update for the positive part as a gradient projection, that is
where the stepsizes are collected in a diagonal matrix. Similarly, we can obtain the update equation for the negative part as
The next step involves finding the shift operator $\mathbf{S}$. For example, we can simply take the first-lag estimate, with the derivation so far based on (11), where the commutative property of $\mathbf{S}$ is already taken into account. Another approach may employ a simplified version of (9) with the commutator term removed for all lags. Since commutativity is not enforced in (9), but is needed when estimating $\mathbf{S}$, we repeat the same procedure as in (12), resulting in the following,
where, if $\mathbf{S}$ is an adjacency matrix, the negative parts are both set to zero for all time instants.
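The positive/negative split and the projection step can be sketched generically as follows (the function names and the toy matrix are ours; the paper's actual gradients would replace `grad` here).

```python
import numpy as np

def split_pos_neg(S):
    """Split S into nonnegative matrices with S = S_pos - S_neg."""
    return np.maximum(S, 0.0), np.maximum(-S, 0.0)

def projected_step(part, grad, mu):
    """Gradient step followed by projection onto the nonnegative orthant;
    entries pushed below zero are clipped, yielding sparsity as a by-product."""
    return np.maximum(part - mu * grad, 0.0)

S = np.array([[0.0, 1.5], [-2.0, 0.0]])
S_pos, S_neg = split_pos_neg(S)
S_pos = projected_step(S_pos, grad=np.array([[0.1, -0.2], [0.0, 0.3]]), mu=1.0)
```

Because each part is constrained to the nonnegative orthant, any entry the gradient drives negative is clipped to exactly zero, which is how sparsity emerges without an explicit thresholding step.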
Since the objective functions in (9) and (11) are not pure MSE but include regularizing terms, they are multi-convex, and the solution will thus be biased and not optimal in terms of MSE. The whole procedure to this point has served to identify the causative elements of a GSO, without necessarily recovering their correct values. To this end, we employ an approach known as debiasing, whereby, in order to eliminate the regularization biases, we fix the zero entries of the obtained GSO and only optimize the non-zero entries via a least squares cost. This comes with a caveat: by reducing the data sample size, the noise could be distorted from normality, thus affecting the minimal MSE criterion from the outset if the original data is rather noisy, or of insufficient size.
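Debiasing on a fixed support can be illustrated with a small least squares example (the data and the shrunk estimate below are fabricated for illustration; in the algorithm the support comes from Algorithm 1).

```python
import numpy as np

rng = np.random.default_rng(2)

A = rng.normal(size=(100, 10))
w_true = np.zeros(10)
w_true[[1, 4]] = [2.0, -1.0]
y = A @ w_true                                  # noiseless for clarity

# Suppose regularization found the right support but shrunk the values
w_biased = np.zeros(10)
w_biased[[1, 4]] = [1.2, -0.4]

# Debias: freeze the zeros, re-fit only the non-zero entries by least squares
support = np.flatnonzero(w_biased)
w_debiased = np.zeros(10)
w_debiased[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
```

With the correct support and clean data, the restricted least squares fit recovers the true values exactly; with noisy data it removes the shrinkage bias but not the noise-induced error.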
The final step, the calculation of the temporal coefficients, is discretionary, as our prime purpose is to identify $\mathbf{S}$; as stated above, these components may not be accurately computed due to the biased objectives, possibly leading to erroneous values. On the other hand, if we wish to fully identify the temporal structure of the vertex-time AR process, we can solve for the coefficients via (13) and (14), but the GSO needs to be debiased in order for the result to be mathematically meaningful. Unlike the two earlier optimization sub-problems, this one is not strictly conditioned, and its sparsity constraint aims mainly to render the model succinct. Hence, GAR-LMS is employed to arrive at the update equation, given by
with a small positive regularization constant. This step can be further simplified by taking only the instantaneous samples into (33), to yield
The algorithms derived in this way are summarized in Algorithms 1 and 2.
As mentioned earlier, two paths are possible for Algorithm 1: either to bypass the first of the two steps and consider only the second (we will call this Path 1), or vice versa, to consider the first step and bypass the second (Path 2).
IV-B Fine-Tuning Peripheral Parameters
For desirable accuracy and fidelity of the outcome, there are still some minor parameters which need to be fine-tuned. These include the regularization constants, the stepsizes, and the forgetting factor.
For the $\ell_1$-norm related constants, these can be initially expressed via
where the constant vector has entry values decreasing with the shift order, and acts upon the corresponding block of variables. For the stepsizes, Armijo backtracking is employed to yield suitable values, while the remaining parameters have to be determined manually. Notice that while some prior knowledge is available for the regularization profile (decreasing-valued entries) and for the forgetting factor (close to unity), the other parameters are rather unconstrained.
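Armijo backtracking, used here to set the stepsizes, shrinks the step until a sufficient-decrease condition holds; a standard textbook form is sketched below (the toy quadratic and all parameter values are illustrative).

```python
import numpy as np

def armijo_step(f, grad_f, x, mu0=1.0, beta=0.5, c=1e-4, max_tries=30):
    """Backtracking line search: shrink mu until
    f(x - mu*g) <= f(x) - c * mu * ||g||^2 (Armijo condition)."""
    g = grad_f(x)
    mu = mu0
    for _ in range(max_tries):
        if f(x - mu * g) <= f(x) - c * mu * (g @ g):
            break
        mu *= beta
    return mu

# Toy quadratic f(x) = 0.5 * ||x||^2, whose gradient is x itself
f = lambda x: 0.5 * (x @ x)
grad_f = lambda x: x
mu = armijo_step(f, grad_f, np.array([3.0, -4.0]))
```

In the adaptive setting, such a search would be run per update on the instantaneous cost, trading a few extra function evaluations for robustness to poorly scaled gradients.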
IV-C Discussion on Convergence
A rigorous convergence analysis of the graph random processes can be found in the literature. However, the assumptions for successful convergence are quite restrictive, because the graph signal not only has to obey a specific sparsity structure (Assumption A5 therein), but also to exhibit a very strong stability condition (Assumptions A4 and A6 therein), to which only a few classes of topologies conform, such as $k$-regular graphs. While the proof is without doubt rigorous, it is largely theoretical and of limited applicability to real-world cases. Attempts to relax the assumptions underpinning the proof have had limited success; this is partly due to the fact that the base problem (9) is inherently biased; for example, the $\ell_1$-norm regularizing terms usually exhibit the side effect of underestimating the non-zero elements, not to mention the more complex commutator term. Therefore, it may be more favorable to adopt a different convergence measure; that is, rather than the mean squared error, we could use the percentage of correctly recovered elements of $\mathbf{S}$, regardless of their exact values, a topic for future work.
V. Simulation Results
Since the suitable choice of GSO varies with the application, in our simulations we tested the algorithm with synthetic graph processes which are consistent with the underlying assumptions of sparsity and shift invariance. Three different graph topologies were considered: the arbitrarily random graph (R), the random graph with a power-law degree distribution (PL), and the stochastic block model (SBM). For each topology, the synthetic graph signal was generated by feeding an i.i.d. input signal into the stochastic processes in (6) and (7), with the number of vertices and the time lag order kept fixed throughout all simulations.
V-A Convergence Performance against NMSE
In the first experiment, we examined how the overall algorithm performs in terms of the normalized mean squared error (NMSE) of the GSO and of the coefficient estimates, defined for the GSO as
$$\mathrm{NMSE}(\hat{\mathbf{S}}) = \frac{\|\hat{\mathbf{S}} - \mathbf{S}\|_F^2}{\|\mathbf{S}\|_F^2},$$
where $\|\cdot\|_F$ indicates the Frobenius norm, and analogously for the coefficients. The GSO, $\mathbf{S}$, was generated following the R/PL/SBM topologies, chosen at random, with 20 realizations in total. For the arbitrarily random topology, the weighted edges were drawn at random and then thresholded relative to the maximum absolute value of the components. Finally, the GSO matrix was normalized by a constant times its largest eigenvalue (to ensure stable processes).
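For reference, the Frobenius-norm NMSE used throughout this section can be computed as follows (the 2x2 matrix is a made-up example).

```python
import numpy as np

def nmse(estimate, truth):
    """Normalized mean squared error in the Frobenius norm."""
    return (np.linalg.norm(estimate - truth, 'fro') ** 2
            / np.linalg.norm(truth, 'fro') ** 2)

S = np.array([[0.0, 1.0], [2.0, 0.0]])
assert nmse(S, S) == 0.0
assert np.isclose(nmse(np.zeros((2, 2)), S), 1.0)
```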
The PL topology started from three random initial nodes connected to one another with a given probability; new nodes were then connected with a probability following the preferential attachment process, proportional to the total weight of the existing nodes. If connected, the weighted edges were drawn and thresholded in the same manner as above, with the GSO matrix again normalized by a constant times its largest eigenvalue. In the SBM case, the network was clustered into groups of several vertices each, with the inter-/intra-cluster probabilities of connection allocated by a connection-probability matrix. All the assigned edges were then weighted by an exponential distribution, and the matrix was finally normalized by a constant times its largest eigenvalue. All these specifications yielded an approximately constant level of sparsity in $\mathbf{S}$.
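An SBM-style GSO of the kind described above can be generated as in the sketch below; all parameter values (cluster sizes, connection probabilities, exponential rate, normalization constant) are illustrative choices of ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)

def sbm_gso(sizes, p_in=0.6, p_out=0.05, rate=1.0, c=1.5):
    """Stochastic block-model GSO sketch: Bernoulli edges (denser within
    clusters), exponential weights, and normalization by c times the
    largest eigenvalue magnitude."""
    labels = np.repeat(np.arange(len(sizes)), sizes)
    P = np.where(labels[:, None] == labels[None, :], p_in, p_out)
    W = (rng.random(P.shape) < P) * rng.exponential(1.0 / rate, size=P.shape)
    np.fill_diagonal(W, 0.0)               # no self-loops
    return W / (c * np.max(np.abs(np.linalg.eigvals(W))))

S = sbm_gso([5, 5])
```

The eigenvalue normalization keeps the spectral radius below one, which is what makes the driven AR recursion stable.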
After $\mathbf{S}$ was created, the graph signal was obtained with the coefficients generated sparsely from a mixture distribution. The data were created over 1100 samples, with the first 500 samples left out due to their transient behavior and the latter 600 samples kept for the simulation. We employed Algorithm 1 (Path 1) for the first 400 samples and Algorithm 2 for the remaining 200 samples, to recover the GSO and the coefficients, with the hyper-parameters chosen via a grid search over fixed intervals and steps. For this specific experiment, the selected hyperparameters were those which minimized the steady-state NMSE, i.e. the error averaged over the samples for which the estimate is in its steady state. This was repeated 20 times to obtain 20 realizations, which were then averaged to display the outcome.
In terms of NMSE, the regularized algorithm (Algorithm 1) failed to minimize the 'normed' error of both the GSO and the coefficients, levelling off at a certain error floor, with a jittery pattern observed in the NMSE of the coefficients. Afterwards, the debiasing process (Algorithm 2) managed to significantly reduce the error to a very low level, as expected from a generic adaptive algorithm. At first glance, one may question the utility of the first step (Algorithm 1), as the standard stochastic algorithm (Algorithm 2) could seemingly accomplish the whole task. The answer is that Algorithm 2 (debiasing) only manipulates the explanatory part of the GSO, as determined by Algorithm 1. It would therefore be a disadvantage to neglect its capability of identifying the correct topology of $\mathbf{S}$, which is considered in the next experiment.
V-B Identifiability of the GSO Topologies
The same data formulation as in the previous section was used. Here, we focus on the likelihood that Algorithm 1 successfully identifies the right topology, i.e. the non-zero elements of $\mathbf{S}$, regardless of their exact values. To this end, we compared the rates of false alarm (taking a zero element as non-zero) and miss (failing to identify a non-zero element) for each topology. Each case was calculated as an average over repeated random trials.
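The two error types can be computed directly from the supports of the true and estimated GSOs; a small sketch (with a fabricated 2x2 example) is:

```python
import numpy as np

def fa_miss_rates(S_est, S_true, tol=1e-8):
    """False-alarm rate (zero entry flagged as non-zero) and miss rate
    (non-zero entry not identified)."""
    est_nz = np.abs(S_est) > tol
    true_nz = np.abs(S_true) > tol
    fa = np.sum(est_nz & ~true_nz) / max(np.sum(~true_nz), 1)
    miss = np.sum(~est_nz & true_nz) / max(np.sum(true_nz), 1)
    return fa, miss

S_true = np.array([[0.0, 1.0], [0.0, 0.0]])
S_est = np.array([[0.0, 0.9], [0.3, 0.0]])
fa, miss = fa_miss_rates(S_est, S_true)   # one spurious edge, no misses
```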
As a consistent benchmark, we tuned the hyperparameters such that, in each simulation, the sum of total false alarms and total misses was minimal with respect to the hyperparameter grids described in the previous experiment. We tested our data twice, with Path 1 and Path 2 of Algorithm 1, to establish whether the order of enforcing the commutator term affects the learning performance. Table I shows the probabilities of false alarm and miss for Path 1 of Algorithm 1, while Table II shows those for Path 2. Observe that Path 1 (using the commutator term when estimating the GSO rather than when solving for the filter matrices) yielded more accurate recovery than Path 2 (using the commutator term while solving for the filter matrices). Regarding the topology-wise comparison, the results expectedly show that specific topologies affect algorithm performance. When testing the arbitrarily random topology, we noticed that the resulting recovery was not consistent, as indicated by the sum of false alarms and misses varying considerably from experiment to experiment. Fig. 2(a) shows one of the random-topology trials which is close to the average. For GSOs with a clearly specified topology (SBM and PL), the recovery rate was more consistent, with most SBM and PL trials giving a recovery outcome close to the average. Figs. 2(b) and (c) visualize trial cases for these two topologies.
It should be noted that the recovery results for the SBM and PL topologies were not outstanding, as some trials of the arbitrarily random topology gave more precise identification; an example is shown in Fig. 2. While the structure of the SBM and PL graphs ensured that the algorithm was less susceptible to identifying wrong edges, it disproportionately lacked the ability to detect all the right ones (the miss rates were quite high compared to the random benchmark). When attempting to reduce these high miss rates, the false alarms outgrew the intended reduction; in other words, after reaching a certain point, the algorithm began to wrongly identify edges at a rapid rate. Visually, we could not yet distinguish which characteristics of $\mathbf{S}$ would render the algorithm more effective.
Finally, we note that these simulations were run on a small-scale problem due to computational limits, and hence the size of $\mathbf{S}$ and the number of trials per topology could give biased and more variable results compared with experiments involving thousands of nodes and hundreds of trials. Nevertheless, the findings in this work indicate that there is much more room for discovery in this research topic, since topology constraints play a crucial role in selecting appropriate optimization techniques for devising learning algorithms. This already suggests that other topological statistics, such as the graph diameter and node degrees, could help with the design of optimal algorithms.
VI. Conclusion
This paper has presented a first design of adaptive graph signal processing, achieved by jointly formulating the problem and devising a novel online algorithm accordingly. The model is based on vector autoregression (VAR), where the coefficient matrices are constrained by the graph topology via a graph shift operator (GSO). The vertex-time relationship has been explained through a graph filter (vertex part) embedded into a VAR time series (time part), with causality imposed on the model to determine the lag characteristics of the vertex-time models. To alleviate the non-convexity arising from the polynomial structure of the graph filters, the problem has been divided into three sub-problems, which are themselves re-expressed as convex problems. The algorithm has then been derived based on the split gradient projection method, which groups the first two sub-problems into Algorithm 1 and the last into Algorithm 2. The rationale is that the method is expected to produce biased results due to the heavy-handed regularization of the problem; therefore, after Algorithm 1, only the non-zero entries of the resulting GSO are computed in Algorithm 2, to debias the solution. Finally, experiments have been carried out to illustrate the potential and limitations of the proposed method. The fine-tuning of hyperparameters poses a further challenge, as the empirical results are shown to be highly susceptible to how these hyperparameters are collectively set.
-  D. P. Mandic and J. Chambers. 2001. Recurrent Neural Networks for Prediction: Learning Algorithms, Architectures and Stability. John Wiley & Sons, Inc., New York, NY, USA.
-  P. S. R. Diniz, Adaptive Filtering: Algorithms and Practical Implementation, Springer Science & Business Media, 2012.
-  J. W. Tao and W. X. Chang, “A novel combined beamformer based on hypercomplex processes,” IEEE Trans. Aero. Electron. Sys., vol. 49, no. 2, pp. 1276-1289, Apr. 2013.
-  T. Variddhisaï and D. P. Mandic, “On an RLS-like LMS adaptive filter,” in Proc. 22nd Int. Conf. Digital Signal Process. (DSP), London, 2017, pp. 1-5.
-  C. Took and D. P. Mandic, “A quaternion widely linear adaptive filter,” IEEE Trans. Signal Process., vol. 58, no. 8, pp. 4427-4431, Aug. 2010.
-  T. Variddhisaï and D. P. Mandic, “Online Multilinear Dictionary Learning,” arXiv:1703.02492, 2017.
-  B. Widrow and M. E. Hoff, “Adaptive switching circuits,” in IRE WESCON Conv. Rec., vol. 4, pp. 96-104, 1960.
-  P. Di Lorenzo, S. Barbarossa, P. Banelli and S. Sardellitti, “Adaptive Least Mean Squares Estimation of Graph Signals,” IEEE Trans. Signal Info. Process. Net., vol. 2, no. 4, pp. 555-568, Dec. 2016.
-  P. Di Lorenzo, P. Banelli, E. Isufi, S. Barbarossa and G. Leus, “Adaptive Graph Signal Processing: Algorithms and Optimal Sampling Strategies,” IEEE Trans. Signal Process., vol. 66, no. 13, pp. 3584-3598, Jul. 2018.
-  K. Qiu, X. Mao, X. Shen, X. Wang, T. Li and Y. Gu, “Time-Varying Graph Signal Reconstruction,” IEEE Jour. Select. Topic. Sig. Process., vol. 11, no. 6, pp. 870-883, Sep 2017.
-  D. Romero, V. N. Ioannidis and G. B. Giannakis, “Kernel-Based Reconstruction of Space-Time Functions on Dynamic Graphs,” IEEE Jour. Select. Topic. Sig. Process., vol. 11, no. 6, pp. 856-869, Sep 2017.
-  E. Isufi, A. Loukas, N. Perraudin and G. Leus, “Forecasting Time Series With VARMA Recursions on Graphs,” IEEE Trans. Signal Process., vol. 67, no. 18, pp. 4870-4885, Sep 2019.
-  J. Mei and J. M. F. Moura, “Signal Processing on Graphs: Causal Modeling of Unstructured Data,” IEEE Trans. Signal Process., vol. 65, no. 8, pp. 2077-2092, Apr 2017.
-  L. Stankovic, D. P. Mandic, M. Dakovic, I. Kisil, E. Sejdic and A. G. Constantinides, “Understanding the Basis of Graph Signal Processing via an Intuitive Example-Driven Approach [Lecture Notes],” IEEE Sig. Process. Mag., vol. 36, no. 6, pp. 133-145, Nov 2019.
-  A. G. Marques, S. Segarra, G. Leus and A. Ribeiro, “Stationary Graph Processes and Spectral Estimation,” IEEE Trans. Signal Process., vol. 65, no. 22, pp. 5911-5926, Nov 2017.
-  B. Scalzo-Dees, L. Stankovic, M. Dakovic, A. G. Constantinides and D. P. Mandic, “A Class of Doubly Stochastic Shift Operators for Random Graph Signals and their Boundedness,” arXiv, 1908.01596, eess.SP, 2019
-  A. Papoulis and S. U. Pillai, Probability, Random Variables and Stochastic Processes, 4th ed., McGraw-Hill, 2002.
-  Y. Chen, Y. Gu and A. O. Hero, “Sparse LMS for system identification,” in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP), Taipei, pp. 3125-3128, 2009.
-  R. C. de Lamare and R. Sampaio-Neto, “Sparsity-Aware Adaptive Algorithms Based on Alternating Optimization and Shrinkage,” IEEE Signal Process. Lett., vol. 21, no. 2, pp. 225-229, Feb 2014.
-  O. Taheri and S. A. Vorobyov, “Sparse channel estimation with lp-norm and reweighted l1-norm penalized least mean squares,” in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP), Prague, 2011, pp. 2864-2867.
-  M. Schmidt, G. Fung, R. Rosales, Fast Optimization Methods for L1 Regularization: A Comparative Study and Two New Approaches. Lecture Notes in Computer Science, vol 4701. Springer, Berlin, Heidelberg, 2007.
-  D. L. Donoho, “De-noising by soft-thresholding,” IEEE Trans. Info. Theo., vol. 41, no. 3, pp. 613-627, May 1995.
-  S. Kim, K. Koh, M. Lustig, S. Boyd, and D. Gorinevsky, “A Method for Large-Scale $\ell_1$-Regularized Least Squares Problems With Applications in Signal Processing and Statistics,” Dept. Elect. Eng., Stanford Univ., 2007 [Online].
-  A. Barabasi and R. Albert, “Emergence of Scaling in Random Networks,” Science, vol. 286, no. 5439, pp. 509–512, Oct 1999.
-  B. Karrer and M. E. J. Newman, “Stochastic blockmodels and community structure in networks,” Phys. Rev. E, Stat., Nonlin., Soft Matt. Phys., vol. 83, no. 1 Pt 2, pp. 016-107, Jan 2011.