Spectral redemption: clustering sparse networks

06/24/2013 ∙ by Florent Krzakala, et al.

Spectral algorithms are classic approaches to clustering and community detection in networks. However, for sparse networks the standard versions of these algorithms are suboptimal, in some cases completely failing to detect communities even when other algorithms such as belief propagation can do so. Here we introduce a new class of spectral algorithms based on a non-backtracking walk on the directed edges of the graph. The spectrum of this operator is much better-behaved than that of the adjacency matrix or other commonly used matrices, maintaining a strong separation between the bulk eigenvalues and the eigenvalues relevant to community structure even in the sparse case. We show that our algorithm is optimal for graphs generated by the stochastic block model, detecting communities all the way down to the theoretical limit. We also show the spectrum of the non-backtracking operator for some real-world networks, illustrating its advantages over traditional spectral clustering.


I. Spectral Clustering and Sparse Networks

Figure 1: The spectrum of the adjacency matrix $A$ of a sparse network generated by the block model (excluding the zero eigenvalues), averaged over several realizations. Even though the eigenvalue given by (2) satisfies the threshold condition (1) and lies outside the semicircle of radius $2\sqrt{c}$, deviations from the semicircle law cause it to get lost in the bulk, and the eigenvector of the second largest eigenvalue is uncorrelated with the community structure. As a result, spectral algorithms based on $A$ are unable to identify the communities in this case.

In order to study the effectiveness of spectral algorithms in a specific ensemble of graphs, suppose that a graph $G$ is generated by the stochastic block model blockmodel1 . There are $q$ groups of vertices, and each vertex $v$ has a group label $g_v \in \{1,\dots,q\}$. Edges are generated independently according to a $q \times q$ matrix $p$ of probabilities, with $\Pr[A_{uv}=1] = p_{g_u,g_v}$. In the sparse case, we have $p_{ab} = c_{ab}/n$, where the affinity matrix $c_{ab}$ stays constant in the limit $n \to \infty$.

For simplicity we first discuss the commonly studied case where $c_{ab}$ has two distinct entries: $c_{ab} = c_{\rm in}$ if $a = b$ and $c_{ab} = c_{\rm out}$ if $a \neq b$. We take $q = 2$ with two groups of equal size, and assume that the network is assortative, i.e., $c_{\rm in} > c_{\rm out}$. We summarize the general case of more groups, arbitrary degree distributions, and so on in subsequent sections below.

The group labels are hidden from us, and our goal is to infer them from the graph. Let $c = (c_{\rm in} + c_{\rm out})/2$ denote the average degree. The detectability threshold decelle-etal1 ; decelle-etal2 ; mossel-neeman-sly states that in the limit $n \to \infty$, unless

    $c_{\rm in} - c_{\rm out} > 2\sqrt{c}$,    (1)

the randomness in the graph washes out the block structure to the extent that no algorithm can label the vertices better than chance. Moreover, mossel-neeman-sly proved that below this threshold it is impossible to identify the parameters $c_{\rm in}$ and $c_{\rm out}$, while above the threshold these parameters are easily identifiable.
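As a worked numerical illustration of condition (1) (the parameter values here are our own, chosen for concreteness rather than taken from the paper's experiments):

    \[
      c_{\rm in}=5,\; c_{\rm out}=1:\quad c = 3,\quad c_{\rm in}-c_{\rm out} = 4 > 2\sqrt{3}\approx 3.46 \;\;\text{(detectable)},
    \]
    \[
      c_{\rm in}=4,\; c_{\rm out}=2:\quad c = 3,\quad c_{\rm in}-c_{\rm out} = 2 < 2\sqrt{3} \;\;\text{(undetectable)}.
    \]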

The adjacency matrix $A$ is defined as the $n \times n$ matrix with $A_{uv} = 1$ if $(u,v) \in E$ and $A_{uv} = 0$ otherwise. A typical spectral algorithm assigns each vertex a $k$-dimensional vector according to its entries in the first $k$ eigenvectors of $A$ for some small $k$, and clusters these vectors according to a heuristic such as the $k$-means algorithm (often after normalizing or weighting them in some way). In the case $q = 2$, we can simply label the vertices according to the sign of the second eigenvector.
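To make this standard pipeline concrete, here is a minimal NumPy sketch for $q = 2$ (the graph generator, parameter values, and variable names are our own illustrations, not taken from the paper):

    import numpy as np

    def sbm_adjacency(n, c_in, c_out, rng):
        """Two-group stochastic block model; groups are [0, n/2) and [n/2, n)."""
        sigma = np.where(np.arange(n) < n // 2, 1, -1)         # planted labels, +1/-1
        p = np.where(np.equal.outer(sigma, sigma), c_in / n, c_out / n)
        upper = np.triu(rng.random((n, n)) < p, k=1)           # independent edges, no self-loops
        return (upper | upper.T).astype(float), sigma

    rng = np.random.default_rng(0)
    A, sigma = sbm_adjacency(n=2000, c_in=7.0, c_out=1.0, rng=rng)

    # Standard spectral clustering for q = 2: sign of the second eigenvector of A.
    eigvals, eigvecs = np.linalg.eigh(A)                       # eigenvalues in ascending order
    labels = np.sign(eigvecs[:, -2])                           # eigenvector of the 2nd largest eigenvalue
    accuracy = max(np.mean(labels == sigma), np.mean(labels == -sigma))
    print(f"fraction of correctly labeled vertices: {accuracy:.2f}")

With parameters comfortably above the threshold, this recovers the groups; the point made below is that in the sparser regime near the threshold this same pipeline fails.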

As shown in nadakuditi-newman1 , spectral algorithms succeed all the way down to the threshold (1) if the graph is sufficiently dense. In that case $A$'s spectrum has a discrete part and a continuous part in the limit $n \to \infty$. Its first eigenvector essentially sorts vertices according to their degree, while the second eigenvector is correlated with the communities. The second eigenvalue is given by

    $\mu_2 = \frac{c_{\rm in} - c_{\rm out}}{2} + \frac{c_{\rm in} + c_{\rm out}}{c_{\rm in} - c_{\rm out}}$.    (2)

The question is when this eigenvalue gets lost in the continuous bulk of eigenvalues coming from the randomness in the graph. This part of the spectrum, like that of a sufficiently dense Erdős-Rényi random graph, is asymptotically distributed according to Wigner's semicircle law Wigner ,

    $\rho(\mu) = \frac{\sqrt{4c - \mu^2}}{2\pi c}$.

Thus the bulk of the spectrum lies in the interval $[-2\sqrt{c}, 2\sqrt{c}]$. If $\mu_2 > 2\sqrt{c}$, which is equivalent to (1), the spectral algorithm can find the corresponding eigenvector, and it is correlated with the true community structure.

However, in the sparse case where $c$ is constant while $n$ is large, this picture breaks down for a number of reasons. Most importantly, the leading eigenvalues of $A$ are dictated by the vertices of highest degree, and the corresponding eigenvectors are localized around these vertices KrivelevichSudakov:03 . As $n$ grows, these eigenvalues exceed $\mu_2$, swamping the community-correlated eigenvector, if any, in a bulk of uninformative eigenvectors. As a result, spectral algorithms based on $A$ fail a significant distance above the threshold given by (1). Moreover, this gap grows as $n$ increases: for instance, the largest eigenvalue grows as the square root of the largest degree, which is roughly proportional to $\log n / \log\log n$ for sparse Erdős-Rényi graphs. To illustrate this problem, the spectrum of $A$ for a large graph generated by the block model is shown in Fig. 1.

Other popular operators for spectral clustering include the Laplacian $L = D - A$, where $D$ is the diagonal matrix of vertex degrees, the random walk matrix $D^{-1}A$, and the modularity matrix $M_{uv} = A_{uv} - d_u d_v / 2m$. However, all of these run into qualitatively the same difficulties as $A$ in the sparse case. Another simple heuristic is to remove the high-degree vertices (e.g. CO:10 ), but this throws away a significant amount of information; in the sparse case it can even destroy the giant component, causing the graph to fall apart into disconnected pieces BoJaRi:07 .

II. The Non-Backtracking Operator

Figure 2: The spectrum of the non-backtracking matrix $B$ for a network generated by the block model with the same parameters as in Fig. 1. The leading eigenvalue is at $c$, the second eigenvalue is close to $\mu = (c_{\rm in} - c_{\rm out})/2$, and the bulk of the spectrum is confined to the disk of radius $\sqrt{c}$. Since the second eigenvalue lies outside the bulk, a spectral algorithm that labels vertices according to the sign of $B$'s second eigenvector (summed over the incoming edges at each vertex) labels the majority of vertices correctly.

The main contribution of this paper is to show how to redeem the performance of spectral algorithms in sparse networks by using a different linear operator. The non-backtracking matrix $B$ is a $2m \times 2m$ matrix, where $m$ is the number of edges, defined on the directed edges of the graph. Specifically,

    $B_{(u \to v),(w \to x)} = 1$ if $v = w$ and $x \neq u$, and $0$ otherwise.

Using $B$ rather than $A$ addresses the problem described above. The spectrum of $B$ is not sensitive to high-degree vertices, since a non-backtracking walk arriving at a vertex cannot turn around and return to it immediately. Another convenient property of $B$ is that any tree dangling off the graph, or disconnected from it, contributes only zero eigenvalues to the spectrum, since a non-backtracking walk entering such a tree is eventually forced to a leaf, where it has nowhere to go. Similarly, one can show that unicyclic components yield eigenvalues that are either zero or roots of unity.
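As a concrete, entirely our own illustration of the definition and of these two properties, the following NumPy sketch builds $B$ from an edge list and inspects its spectrum for a toy graph made of a 5-cycle with a dangling path:

    import numpy as np

    def nonbacktracking_matrix(edges):
        """Dense non-backtracking matrix B on the 2m directed edges of an undirected graph."""
        directed = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
        index = {e: i for i, e in enumerate(directed)}
        B = np.zeros((len(directed), len(directed)))
        for u, v in directed:
            for w, x in directed:
                if v == w and x != u:                     # step u -> v followed by v -> x, no backtracking
                    B[index[(u, v)], index[(w, x)]] = 1.0
        return B

    # Toy graph: a 5-cycle 0-1-2-3-4-0 with a dangling path 0-5-6.
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 5), (5, 6)]
    mu = np.linalg.eigvals(nonbacktracking_matrix(edges))

    # The dangling path contributes only zero eigenvalues, while the cycle contributes
    # 5th roots of unity, so the sorted moduli are four 0's followed by ten 1's.
    print(np.round(np.sort(np.abs(mu)), 3))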

As a result, $B$ has the following spectral properties in the limit $n \to \infty$ in the ensemble of graphs generated by the block model. The leading eigenvalue is the average degree $c$. At any point above the detectability threshold (1), the second eigenvalue is associated with the block structure and reads

    $\mu = \frac{c_{\rm in} - c_{\rm out}}{2}$.    (3)

Moreover, the bulk of $B$'s spectrum is confined to the disk in the complex plane of radius $\sqrt{c}$, as shown in Fig. 2. As a result, the second eigenvalue is well separated from the top of the bulk, i.e., from the third largest eigenvalue in absolute value, as shown in Fig. 3.

The eigenvector corresponding to this second eigenvalue is strongly correlated with the community structure. Since $B$ is defined on directed edges, at each vertex we sum this eigenvector over all of the vertex's incoming edges. If we label vertices according to the sign of this sum, then the majority of vertices are labeled correctly (up to a global change of sign, which switches the two communities). Thus a spectral algorithm based on $B$ succeeds whenever $\mu > \sqrt{c}$, i.e. whenever (1) holds, and unlike standard spectral algorithms, this criterion continues to hold even in the sparse case. We present arguments for these claims in the next section.
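The following end-to-end sketch (our own code; the parameter values are illustrative choices, not the paper's) carries out exactly this procedure on a sparse block-model graph: build $B$, compute the eigenvector of its second-largest eigenvalue, sum it over incoming edges, and label by sign:

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    rng = np.random.default_rng(1)
    n, c_in, c_out = 2000, 5.0, 1.0                       # illustrative parameters, above threshold (1)
    sigma = np.where(np.arange(n) < n // 2, 1, -1)        # planted community labels
    prob = np.where(np.equal.outer(sigma, sigma), c_in / n, c_out / n)
    upper = np.triu(rng.random((n, n)) < prob, k=1)
    A = upper | upper.T                                   # symmetric boolean adjacency matrix

    # Enumerate the 2m directed edges and build the non-backtracking matrix B.
    src, dst = np.nonzero(A)                              # each undirected edge appears in both directions
    edge_id = {(u, v): e for e, (u, v) in enumerate(zip(src, dst))}
    rows, cols = [], []
    for e, (u, v) in enumerate(zip(src, dst)):
        for x in np.nonzero(A[v])[0]:
            if x != u:                                    # forbid the walk from backtracking to u
                rows.append(e)
                cols.append(edge_id[(v, x)])
    B = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(len(src), len(src)))

    # Eigenvector of the second-largest eigenvalue, summed over incoming edges at each vertex.
    vals, vecs = spla.eigs(B, k=2, which="LM")
    second = vecs[:, np.argsort(-np.abs(vals))[1]].real
    score = np.zeros(n)
    np.add.at(score, dst, second)                         # edge u -> v contributes at its head v
    labels = np.sign(score)
    overlap = abs(np.mean(labels == sigma) - 0.5) * 2     # 1 = perfect, 0 = chance (two equal groups)
    print(f"overlap with the planted communities: {overlap:.2f}")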

III. Reconstruction and a Community-Correlated Eigenvector

In this section we sketch justifications of the claims in the previous section regarding $B$'s spectral properties, showing that its second eigenvector is correlated with the communities whenever (1) holds. Let us start by recalling how to generalize equation (2) for the adjacency matrix of sparse graphs. We follow mossel-neeman-sly , who derived a similar result in the case of random regular graphs.

With $\mu$ defined as in (3), for a given integer $r$, consider the vector $v$ whose entries are

    $v_u = \mu^{-r} \sum_{w :\, d(u,w) = r} \sigma_w$,    (4)

where $\sigma_w = \pm 1$ denotes $w$'s community and $d(\cdot,\cdot)$ denotes graph distance. By the theory of the reconstruction problem on trees KestenStigum:66 ; MosselPeres:03 , if (1) holds then the correlation between $v_u$ and $\sigma_u$, averaged over the graph, is bounded away from zero in the limit $r \to \infty$.

We will show that if $r$ is large, but small compared to the diameter of the graph, then $v$ is closely related to the second eigenvector of $B$. Thus if we label vertices according to the sign of this second eigenvector (summed over all the incoming edges at each vertex), we obtain the true communities with significant accuracy.

First we show that $v$ approximately obeys an eigenvalue equation that generalizes (2). Write $v^{(r)}$ for the vector defined in (4) at radius $r$. As long as the radius-$r$ neighborhood of $u$ is a tree, each neighbor of $u$ contributes to the sums at radii $r - 1$ and $r + 1$, so

    $(A v^{(r)})_u = \mu\, v^{(r+1)}_u + \frac{d_u - 1}{\mu}\, v^{(r-1)}_u$,    (5)

where $d_u$ is $u$'s degree. Summing $\sigma$ over $u$'s neighborhood gives the expectation

    $\mathbb{E}\, v^{(r)}_u = \sigma_u$,

and summing the fluctuations over the (in expectation) $c^r$ vertices at distance $r$ gives a standard deviation of order

    $\left( \sqrt{c}/\mu \right)^{r}$.

If $r \gg 1$ and (1) holds so that $\mu > \sqrt{c}$, these fluctuations tend to zero. In that case, we can identify $v^{(r-1)}$, $v^{(r)}$, and $v^{(r+1)}$ with a single vector $v$, and (5) becomes

    $(A v)_u = \left( \mu + \frac{d_u - 1}{\mu} \right) v_u$.    (6)

In particular, in the dense case we can recover (2) by approximating $d_u$ with $c$, or equivalently pretending that the graph is $c$-regular. Then $v$ is an eigenvector of $A$ with eigenvalue $\mu + (c-1)/\mu$, which for large $c$ is the value $\mu + c/\mu$ given in (2).
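Spelled out in the notation above (a sketch of the algebra in our own words, not a quotation of the paper):

    \[
      (Av)_u = \Bigl(\mu + \tfrac{d_u - 1}{\mu}\Bigr) v_u
      \;\xrightarrow{\;d_u \to c\;}\;
      Av = \Bigl(\mu + \tfrac{c-1}{\mu}\Bigr) v
      \approx \Bigl(\tfrac{c_{\rm in}-c_{\rm out}}{2} + \tfrac{c_{\rm in}+c_{\rm out}}{c_{\rm in}-c_{\rm out}}\Bigr) v .
    \]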

We define an analogous approximate eigenvector of $B$,

    $v_{u \to v} = \mu^{-r} \sum_{w \to x \,:\; d(u \to v,\, w \to x) = r} \sigma_x$,

where $d$ now refers to the number of steps in the graph of directed edges, i.e., the length of the non-backtracking walk from $u \to v$ to $w \to x$. We have in expectation

    $\mathbb{E}\, v_{u \to v} = \sigma_v$,

and as before the fluctuation around this expectation tends to zero as $r$ increases. Identifying the vectors at successive radii, as above, gives an approximate eigenvector of $B$ with eigenvalue $\mu$,

    $(B v)_{u \to v} = \mu\, v_{u \to v}$.    (7)

Furthermore, summing this eigenvector over all the incoming edges at a vertex gives, in expectation,

    $\mathbb{E} \sum_{u \in \partial v} v_{u \to v} = d_v\, \sigma_v$,

giving signs correlated with the true community memberships $\sigma_v$.

We note that the relation between the eigenvalue equation (7) for $B$ and the quadratic eigenvalue equation (6) is exact and well known in the theory of zeta functions of graphs Hashimoto:89 ; Bass92 ; AnFrHo:07 . More generally, all eigenvalues of $B$ that are not $\pm 1$ are the roots $\mu$ of the equation

    $\det\!\left[ \mu^2 \mathbb{1} - \mu A + (D - \mathbb{1}) \right] = 0$.    (8)

This equation hence describes $2n$ of $B$'s eigenvalues. These are the eigenvalues of a $2n \times 2n$ matrix,

    $B' = \begin{pmatrix} 0 & D - \mathbb{1} \\ -\mathbb{1} & A \end{pmatrix}$.    (9)

The eigenvectors of $B'$ are of the form $\left( \mu^{-1}(D - \mathbb{1})\, v,\; v \right)$, where $v$ obeys (6). Thus we can find $v$ by dealing with a $2n \times 2n$ matrix rather than a $2m \times 2m$ one, which considerably reduces the computational complexity of our algorithm.
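In practice this reduction is straightforward to use. The sketch below (our own code and parameter choices) builds $B'$ as in (9) for a sparse block-model graph and labels vertices by the sign of the vertex part of its second eigenvector; on small graphs one can also check directly that its eigenvalues agree with those of the full $2m \times 2m$ matrix $B$ once the trivial $\pm 1$ eigenvalues are removed.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def bprime(A):
        """The 2n x 2n matrix B' of Eq. (9): [[0, D - I], [-I, A]]."""
        n = A.shape[0]
        D = sp.diags(np.asarray(A.sum(axis=1)).ravel())
        I = sp.identity(n)
        return sp.bmat([[None, D - I], [-I, A]], format="csr")

    rng = np.random.default_rng(2)
    n, c_in, c_out = 2000, 5.0, 1.0                       # illustrative parameters
    sigma = np.where(np.arange(n) < n // 2, 1, -1)
    prob = np.where(np.equal.outer(sigma, sigma), c_in / n, c_out / n)
    upper = np.triu(rng.random((n, n)) < prob, k=1)
    A = sp.csr_matrix((upper | upper.T).astype(float))

    # The two eigenvalues with the largest real part are c and the community eigenvalue;
    # the second block (rows n..2n-1) of the eigenvector plays the role of v in (6).
    vals, vecs = spla.eigs(bprime(A), k=2, which="LR")
    v = vecs[n:, np.argsort(-vals.real)[1]].real
    labels = np.sign(v)
    overlap = abs(np.mean(labels == sigma) - 0.5) * 2
    print(f"overlap: {overlap:.2f}")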

Next, we argue that the bulk of $B$'s spectrum is confined to the disk of radius $\sqrt{c}$. First note that for any matrix $M$ with eigenvalues $\mu_i$,

    $\sum_i |\mu_i|^{2r} \le \operatorname{tr}\!\left[ M^r (M^r)^{T} \right]$.

On the other hand, for any fixed $r$, since the graph is locally treelike in the limit $n \to \infty$, each diagonal entry of $B^r (B^r)^T$, indexed by a directed edge $u \to v$, equals the number of vertices exactly $r$ steps from $v$, other than those reached by stepping back through $u$. In expectation this is $c^r$, so by linearity of expectation $\mathbb{E} \operatorname{tr}\!\left[ B^r (B^r)^T \right] = 2m\, c^r$. In that case, the spectral measure $\rho$ of $B$ has the property that

    $\mathbb{E} \int |\mu|^{2r}\, d\rho(\mu) \le c^r$.

Since this holds for any fixed $r$, we conclude that almost all of $B$'s eigenvalues obey $|\mu| \le \sqrt{c}$. Proving rigorously that all the eigenvalues in the bulk are asymptotically confined to this disk requires a more precise argument, and is left for future work.

As a side remark we note that (8) yields $B$'s entire spectrum for $d$-regular graphs AnFrHo:07 . There are $n$ pairs of eigenvalues $\mu$ of the form

    $\mu = \frac{\lambda \pm \sqrt{\lambda^2 - 4(d-1)}}{2}$,    (10)

where $\lambda$ ranges over the (real) eigenvalues of $A$. The two members of each pair are related by $\mu_+ \mu_- = d - 1$, so all the non-real eigenvalues of $B$ are conjugate pairs lying on the circle of radius $\sqrt{d-1}$. The other $2(m - n)$ eigenvalues are $\pm 1$. For random $d$-regular graphs, the asymptotic spectral density of $B$ then follows straightforwardly from the well-known result of McKay:81 for the spectral density of the adjacency matrix.

Finally, the singular values of $B$ are easy to derive for any simple graph, i.e., one without self-loops or multiple edges. Namely, $B^{T}B - \mathbb{1}$ is block-diagonal: for each vertex $v$, it has a rank-one block of size $d_v$ that connects $v$'s outgoing edges to each other. As a consequence, $B$ has $n$ singular values equal to $d_v - 1$, one for each vertex $v$, and its other $2m - n$ singular values are equal to $1$. However, since $B$ is not symmetric, its eigenvalues and its singular values are different: while its singular values are controlled by the vertex degrees, its eigenvalues are not. This is precisely why its spectral properties are better than those of $A$ and related operators.

Figure 3: The first, second, and third largest eigenvalues of $B$ (in absolute value) as functions of the block model parameters, at fixed average degree $c$, averaged over several networks. The third eigenvalue is complex, so we plot its modulus. The green line in the figure represents the prediction (3) for the second eigenvalue, and the horizontal lines indicate the leading eigenvalue $c$ and the bulk radius $\sqrt{c}$ respectively. The second eigenvalue is well separated from the bulk throughout the detectable regime.

IV. More Than Two Groups and General Degree Distributions

The arguments given above regarding $B$'s spectral properties generalize straightforwardly to other graph ensembles. First, consider block models with $q$ groups, where group $a$ has fractional size $n_a$. The average degree of group $a$ is $c_a = \sum_b c_{ab} n_b$. The hardest case is where $c_a$ is the same for all $a$, so that we cannot simply label vertices according to their degrees.

The leading eigenvector again has eigenvalue $c$, and the bulk of $B$'s spectrum is again confined to the disk of radius $\sqrt{c}$. Now $B$ has $q - 1$ linearly independent eigenvectors with real eigenvalues, in addition to the leading one, and these eigenvectors are correlated with the true group assignment. If these real eigenvalues lie outside the bulk, we can identify the groups by assigning a vector in $\mathbb{R}^{q-1}$ to each vertex, built from these eigenvectors summed over incoming edges, and applying a clustering technique such as $k$-means. These eigenvalues are of the form $c\,t$, where $t$ is a nonzero eigenvalue of the $q \times q$ matrix

    $T_{ab} = \frac{n_a (c_{ab} - c)}{c}$.    (11)

In particular, if $n_a = 1/q$ for all $a$, with $c_{ab} = c_{\rm in}$ for $a = b$ and $c_{ab} = c_{\rm out}$ for $a \neq b$, we have $c\,t = (c_{\rm in} - c_{\rm out})/q$. The detectability threshold is again $c\,t > \sqrt{c}$, or

    $c_{\rm in} - c_{\rm out} > q\sqrt{c}$.    (12)
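As a quick numerical illustration (our own sketch, not the paper's code), the predicted eigenvalues can be computed from the equivalent mean branching matrix $M_{ab} = n_b\, c_{ab}$, whose non-leading eigenvalues coincide with the values $c\,t$ above when all groups have the same average degree:

    import numpy as np

    def predicted_eigenvalues(c_ab, n_a):
        """Leading eigenvalue c and the community-correlated eigenvalues of B, predicted
        from the q x q mean branching matrix M[a, b] = n_b * c_ab (equal average degrees)."""
        M = np.asarray(c_ab, dtype=float) * np.asarray(n_a, dtype=float)[None, :]
        eig = np.sort(np.linalg.eigvals(M).real)[::-1]
        return eig[0], eig[1:]

    # Symmetric example with q = 4 equal groups (numbers are ours, for illustration).
    q, c_in, c_out = 4, 16.0, 4.0
    c_ab = np.full((q, q), c_out) + (c_in - c_out) * np.eye(q)
    c, informative = predicted_eigenvalues(c_ab, np.full(q, 1.0 / q))
    print(c, informative, np.sqrt(c))   # detectable: the informative eigenvalues exceed sqrt(c)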

More generally, if the community-correlated eigenvectors have distinct eigenvalues, we can have multiple transitions where some of them can be detected by a spectral algorithm while others cannot.

There is an important difference between the general case and $q = 2$. While for $q = 2$ it is literally impossible for any algorithm to distinguish the communities below this transition, for larger $q$ the situation is more complicated. In general, once the number of groups is large enough (in both the assortative and the disassortative case), the threshold (12) marks a transition from an "easily detectable" regime to a "hard but detectable" one. In the hard regime it is theoretically possible to find the communities, but it is conjectured that any algorithm that does so takes exponential time decelle-etal1 ; decelle-etal2 . In particular, we have found experimentally that none of $B$'s eigenvectors are correlated with the groups in the hard regime. Nonetheless, our arguments suggest that spectral algorithms based on $B$ are optimal in the sense that they succeed all the way down to this easy/hard transition.

Since a major drawback of the stochastic block model is that its degree distribution is Poisson, we can also consider random graphs with specified degree distributions. Again, the hardest case is where the groups have the same degree distribution. Let $q_d$ denote the fraction of vertices of degree $d$. The average branching ratio of a branching process that explores the neighborhood of a vertex, i.e., the average number of new edges leaving a vertex that we arrive at when following a random edge, is

    $\tilde{c} = \frac{\sum_d q_d\, d(d-1)}{\sum_d q_d\, d}$.

We assume here that the degree distribution has a bounded second moment, so that this process is not dominated by a few high-degree vertices. The leading eigenvalue of $B$ is then $\tilde{c}$, and the bulk of its spectrum is confined to the disk of radius $\sqrt{\tilde{c}}$, even in the sparse case where $\tilde{c}$ does not grow with the size of the graph. If $q = 2$ and the average numbers of new edges linking to a vertex's own group and to the other group are $\tilde{c}_{\rm in}$ and $\tilde{c}_{\rm out}$ respectively, so that $\tilde{c} = \tilde{c}_{\rm in} + \tilde{c}_{\rm out}$, then the approximate eigenvector described in the previous section has eigenvalue $\tilde{c}_{\rm in} - \tilde{c}_{\rm out}$. The detectability threshold (1) then becomes $\tilde{c}_{\rm in} - \tilde{c}_{\rm out} > \sqrt{\tilde{c}}$, or $(\tilde{c}_{\rm in} - \tilde{c}_{\rm out})^2 > \tilde{c}_{\rm in} + \tilde{c}_{\rm out}$. The threshold (12) for $q$ groups generalizes similarly.
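For instance, the branching ratio can be computed directly from an empirical degree sequence (a small helper of our own; the example sequences are illustrative):

    import numpy as np

    def branching_ratio(degrees):
        """Average number of new edges found when arriving at a vertex along a random edge:
        sum_d q_d d (d - 1) / sum_d q_d d."""
        d = np.asarray(degrees, dtype=float)
        return np.mean(d * (d - 1)) / np.mean(d)

    rng = np.random.default_rng(3)
    # For a Poisson degree sequence the branching ratio is close to the mean degree (here 3),
    # while a heavier-tailed sequence with the same mean gives a larger value (here about 6).
    print(branching_ratio(rng.poisson(3, 10_000)))
    print(branching_ratio(rng.geometric(0.25, 10_000) - 1))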

V. Deriving B by Linearizing Belief Propagation

The matrix $B$ also appears naturally as a linearization of the update equations for belief propagation (BP). This linearization was used previously to investigate phase transitions in the performance of the BP algorithm CoMoVi:09 ; Urbanke ; decelle-etal1 ; decelle-etal2 .

We recall that BP is an algorithm that iteratively updates messages $\psi^{u \to v}$, where $u \to v$ ranges over the directed edges of the graph. The message $\psi^{u \to v}$ represents the marginal probability that vertex $u$ belongs to each community, computed as if $v$ were absent from the network. Each such message is updated according to the messages that $u$ receives from its other neighbors $w \neq v$. The update rule depends on the parameters $c_{\rm in}$ and $c_{\rm out}$ of the block model, as well as the expected size of each community. For the simplest case of two equally sized groups, the BP update decelle-etal1 ; decelle-etal2 can be written as

    $\psi^{u \to v}_a = \frac{1}{Z^{u \to v}}\, e^{-h_a} \prod_{w \in \partial u \setminus v} \left( c_{\rm in}\, \psi^{w \to u}_a + c_{\rm out}\, \psi^{w \to u}_{\bar a} \right)$.    (13)

Here $a \in \{1,2\}$ and $\bar a$ denote the two communities, and $Z^{u \to v}$ is a normalization. The term $e^{-h_a}$, where $h_a = \frac{1}{n} \sum_w \left( c_{\rm in}\, \psi^{w}_a + c_{\rm out}\, \psi^{w}_{\bar a} \right)$ and $\frac{1}{n}\sum_w \psi^{w}_a$ is the current estimate of the fraction of vertices in the two groups, represents messages from the non-neighbors of $u$. In the assortative case, it prevents BP from converging to a fixed point where every vertex is in the same community.

The update (13) has a trivial fixed point $\psi^{u \to v}_1 = \psi^{u \to v}_2 = 1/2$, where every vertex is equally likely to be in either community. Writing $\psi^{u \to v}_{1,2} = 1/2 \pm \varepsilon^{u \to v}$ and linearizing around this fixed point gives the following update rule for $\varepsilon$,

    $\varepsilon^{u \to v} = \frac{c_{\rm in} - c_{\rm out}}{c_{\rm in} + c_{\rm out}} \sum_{w \in \partial u \setminus v} \varepsilon^{w \to u}$,

or equivalently, viewing $\varepsilon$ as a vector indexed by the directed edges,

    $\varepsilon = \frac{c_{\rm in} - c_{\rm out}}{c_{\rm in} + c_{\rm out}}\, B\, \varepsilon$.    (14)
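To see the linearization at work numerically, the following sketch (our own code; the external-field term of (13) is omitted for simplicity, and all parameter values are illustrative) performs one sweep of the full update and one sweep of the linear map (14), starting from a small perturbation of the trivial fixed point, and confirms that they agree to first order:

    import numpy as np

    rng = np.random.default_rng(4)
    n, c_in, c_out = 500, 5.0, 1.0                        # illustrative parameters
    sigma = np.where(np.arange(n) < n // 2, 1, -1)
    prob = np.where(np.equal.outer(sigma, sigma), c_in / n, c_out / n)
    upper = np.triu(rng.random((n, n)) < prob, k=1)
    A = upper | upper.T

    src, dst = np.nonzero(A)                              # directed edges u -> v carry the messages
    index = {(u, v): e for e, (u, v) in enumerate(zip(src, dst))}
    rev = np.array([index[(v, u)] for u, v in zip(src, dst)])

    delta = 1e-3
    eps = delta * rng.standard_normal(len(src))           # small perturbation of the trivial fixed point
    psi1, psi2 = 0.5 + eps, 0.5 - eps

    # One sweep of the full update (13), without the external-field term: the new message on
    # u -> v is proportional to the product over w in N(u)\v of (c_in psi1^{w->u} + c_out psi2^{w->u}).
    f1 = np.log(c_in * psi1 + c_out * psi2)
    f2 = np.log(c_out * psi1 + c_in * psi2)
    s1, s2 = np.zeros(n), np.zeros(n)
    np.add.at(s1, dst, f1)                                # accumulate each message's log-factor at its head
    np.add.at(s2, dst, f2)
    new1 = np.exp(s1[src] - f1[rev])                      # divide out the factor coming back from v
    new2 = np.exp(s2[src] - f2[rev])
    eps_bp = 0.5 * (new1 - new2) / (new1 + new2)

    # One sweep of the linearized update (14).
    incoming = np.zeros(n)
    np.add.at(incoming, dst, eps)
    eps_lin = (c_in - c_out) / (c_in + c_out) * (incoming[src] - eps[rev])

    print(np.max(np.abs(eps_bp - eps_lin)) / delta)       # O(delta): the two sweeps agree to first order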

More generally, in a block model with $q$ communities, an affinity matrix $c_{ab}$, and an expected fraction $n_a$ of vertices in each community, linearizing around the trivial point $\psi^{u \to v}_a = n_a$ and defining $\varepsilon^{u \to v}_a = \psi^{u \to v}_a - n_a$ gives a tensor product operator

    $B \otimes T$,    (15)

where $T$ is the $q \times q$ matrix defined in (11).

We can also describe the linearization of BP in terms of the $2n \times 2n$ matrix $B'$ defined in (9). Specifically, if we define $\varepsilon^{\rm out}$ and $\varepsilon^{\rm in}$ as the $n$-dimensional vectors whose entries $\varepsilon^{\rm out}_v$ and $\varepsilon^{\rm in}_v$ are the sums of $\varepsilon$ over $v$'s outgoing and incoming edges respectively, then each sweep of the linearized update acts on them as

    $\begin{pmatrix} \varepsilon^{\rm out} \\ \varepsilon^{\rm in} \end{pmatrix} \;\leftarrow\; \frac{c_{\rm in} - c_{\rm out}}{c_{\rm in} + c_{\rm out}}\; B' \begin{pmatrix} \varepsilon^{\rm out} \\ \varepsilon^{\rm in} \end{pmatrix}$.    (16)

Thus we can analyze BP to first order around the trivial fixed point by keeping track of just $2n$ variables rather than $2m$ of them.

This shows that the spectral properties of the non-backtracking matrix $B$ are closely related to belief propagation. Specifically, the trivial fixed point is unstable, giving rise to a fixed point that is correlated with the community structure, exactly when $B \otimes T$ has an eigenvalue greater than $1$. However, by avoiding the fixed point where all the vertices belong to the same group, we suppress $B$'s leading eigenvalue $c$; thus the criterion for instability is $\nu\, t > 1$, where $t$ is $T$'s largest eigenvalue and $\nu$ is $B$'s second eigenvalue. This is equivalent to (12) in the case where the groups are of equal size.

In general, the BP algorithm achieves slightly better agreement with the actual group assignment, since it approximates the Bayes-optimal inference of the block model. On the other hand, the BP update rule depends on the parameters of the block model, and if these parameters are unknown they need to be learned, which presents additional difficulties zhang-etal . In contrast, our spectral algorithm does not depend on the parameters of the block model, giving it an advantage over BP in addition to its computational efficiency.

VI. Experimental Results and Discussion

Figure 4: The accuracy of spectral algorithms based on different linear operators, and of belief propagation, for two groups of equal size. On the left, we vary $c_{\rm out}$ while fixing the average degree $c$; on the right, we fix the ratio $c_{\rm out}/c_{\rm in}$ and vary the average degree $c$. In each case the detectability transition given by (1) is indicated. Each point is averaged over many instances. Our spectral algorithm based on the non-backtracking matrix $B$ achieves an accuracy close to that of BP, and both remain high all the way down to the transition. Standard spectral algorithms based on the adjacency matrix, modularity matrix, Laplacian, and random walk matrix fail well above the transition, doing no better than chance.
Figure 5: Clustering in the case of three groups of equal size. On the left, a scatter plot of the second and third eigenvectors (x and y axes respectively) of the non-backtracking matrix $B$, with colors indicating the true group assignment. On the right, the analogous plot for the adjacency matrix $A$. Applying $k$-means to the representation given by $B$ yields a large overlap with the true assignment, while applying it to the representation given by $A$ yields almost none.

In Fig. 4, we compare the spectral algorithm based on the non-backtracking matrix $B$ with those based on several classical operators: the adjacency matrix $A$, the modularity matrix, the Laplacian, and the random walk matrix. We see that there is a regime where standard spectral algorithms do no better than chance, while the one based on $B$ achieves a strong correlation with the true group assignment all the way down to the detectability threshold. We also show the performance of belief propagation, which is believed to be asymptotically optimal decelle-etal1 ; decelle-etal2 .

We measure the performance as the overlap, defined as

    $\mathrm{overlap} = \frac{\frac{1}{n} \sum_u \delta_{g_u, \tilde{g}_u} - \frac{1}{q}}{1 - \frac{1}{q}}$.    (17)

Here $g_u$ is the true group label of vertex $u$, and $\tilde{g}_u$ is the label found by the algorithm. We break the symmetry between the groups by maximizing over all $q!$ permutations of the group labels. The overlap is normalized so that it equals $1$ for the true labeling, and $0$ in expectation for a uniformly random labeling.
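A direct transcription of this definition (our own helper; it brute-forces the $q!$ permutations, which is fine for small $q$, and the example labels are made up for illustration):

    import itertools
    import numpy as np

    def overlap(true_labels, found_labels, q):
        """Overlap (17): fraction of correctly labeled vertices, maximized over
        permutations of the q groups, rescaled so chance = 0 and perfect = 1."""
        true_labels = np.asarray(true_labels)
        found_labels = np.asarray(found_labels)
        best = max(
            np.mean(true_labels == np.asarray(perm)[found_labels])
            for perm in itertools.permutations(range(q))
        )
        return (best - 1.0 / q) / (1.0 - 1.0 / q)

    # Example: three groups, 90% of labels correct up to a relabeling of the groups.
    rng = np.random.default_rng(5)
    truth = rng.integers(0, 3, size=1000)
    noisy = np.where(rng.random(1000) < 0.9, (truth + 1) % 3, rng.integers(0, 3, size=1000))
    print(overlap(truth, noisy, q=3))        # close to 0.9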

In Fig. 5 we illustrate clustering in the case $q = 3$. As described above, in the detectable regime we expect to see $q - 1 = 2$ eigenvectors with real eigenvalues that are correlated with the true group assignment. Indeed $B$'s second and third eigenvectors are strongly correlated with the true clustering, and applying $k$-means to the corresponding two-dimensional representation of the vertices gives a large overlap. In contrast, the second and third eigenvectors of the adjacency matrix $A$ are essentially uncorrelated with the true clustering, and similarly for the other traditional operators.

Finally, we turn to real networks to illustrate the advantages of spectral clustering based on the non-backtracking matrix in practical applications. In Fig. 6 we show $B$'s spectrum for several networks commonly used as benchmarks for community detection. In each case we plot a circle whose radius is the square root of the largest eigenvalue. Even though these networks were not generated by the stochastic block model, the spectra look qualitatively similar to the picture discussed above (Fig. 2). This leads to several very convenient properties. For each of these networks, we observed that only the eigenvectors with real eigenvalues are correlated with the group assignment given by the ground truth. Moreover, the real eigenvalues that lie outside the circle are clearly identifiable. This is very unlike the situation for the operators used in standard spectral clustering algorithms, where one must decide, often ambiguously, which eigenvalues are in the bulk and which are outside.

In particular, the number of real eigenvalues outside the circle appears to be a natural indicator of the true number of clusters present in the network, just as it is for networks generated by the stochastic block model. This suggests that in the network of political books there might in fact be 4 groups rather than 3, that in the blog network there might be more than two groups, and that in the NCAA football network there might be 10 groups rather than 12. However, we also note that in some networks large real eigenvalues may correspond to small cliques in the graph; it is a philosophical question whether or not to count these as communities.

Note also that clustering based on the non-backtracking matrix works not only for assortative networks, but also for disassortative ones such as word adjacency networks adjnoun , where the relevant real eigenvalue is negative. It does so without being told in advance which of the two cases applies.

A Matlab implementation with demos that can be used to reproduce our numerical results can be found at code .

Figure 6: Spectrum of the non-backtracking matrix $B$ in the complex plane for some commonly used benchmarks for community detection in real networks, taken from lada ; zachary ; adjnoun ; football ; dolphins ; polbooks . The radius of the circle is the square root of the largest eigenvalue, which is a heuristic estimate of the radius of the bulk. The overlap is computed using the signs of the second eigenvector for the networks with two communities, and using k-means for those with three or more communities. The non-backtracking operator detects communities in all of these networks, with an overlap comparable to the performance of other spectral methods. As in the case of synthetic networks generated by the stochastic block model, the number of real eigenvalues outside the bulk appears to be a good indicator of the number of communities.

VII. Conclusion

While recent advances have made statistical inference of network models for community detection far more scalable than in the past (e.g. decelle-etal1 ; ball-karrer-newman ; pseudolikelihood ; subsampling ), spectral algorithms remain highly competitive because of the computational efficiency of sparse linear algebra. However, for sparse networks there is a large regime in which statistical inference methods such as belief propagation can detect communities while standard spectral algorithms cannot.

We closed this gap by using the non-backtracking matrix $B$ as a new starting point for spectral algorithms. We showed that for sparse networks generated by the stochastic block model, $B$'s spectral properties are much better behaved than those of the adjacency matrix and its relatives. In fact, the resulting algorithm is asymptotically optimal, in the sense that it detects communities all the way down to the detectability transition. We also computed $B$'s spectrum for some common benchmarks for community detection in real-world networks, showing that its real eigenvalues are a good guide to the number of communities and to the correct labeling of the vertices.

Our approach can be straightforwardly generalized to spectral clustering for other types of sparse data, such as real-valued similarities between objects. The definition of $B$ extends naturally to this setting by weighting its nonzero entries with the corresponding similarities, where $s_{uv}$ denotes the similarity index between objects $u$ and $v$. As in the case of graphs, we cluster the objects by computing the top eigenvectors of $B$, projecting onto the low-dimensional space they span to obtain a vector for each object, and using a clustering algorithm such as $k$-means on the projected points clustering-intro . However, we believe that, as for sparse graphs, there will be important regimes in which using $B$ succeeds where standard spectral clustering algorithms fail. Given the wide use of spectral clustering throughout the sciences, we expect that the non-backtracking matrix and its generalizations will have a significant impact on data analysis.

Acknowledgements.
We are grateful to Noga Alon, Brian Karrer, Mark Newman, Nati Linial, and Xiaoran Yan for helpful discussions. C.M. and P.Z. are supported by AFOSR and DARPA under grant FA9550-12-1-0432. F.K. and P.Z. have been supported in part by the ERC under the European Union's 7th Framework Programme Grant Agreement 307087-SPARCS. E.M. and J.N. were supported by NSF DMS grant 1106999 and DOD ONR grant N000141110140.

References

  • (1) Holland P W, Laskey K B, Leinhardt S (1983). Stochastic blockmodels: First steps. Social Networks 5:109–137.
  • (2) Wang Y J, Wong G Y (1987). Stochastic Blockmodels for Directed Graphs. Journal of the American Statistical Association 82(397):8–19.
  • (3) Von Luxburg, U. (2007). A tutorial on spectral clustering. Statistics and computing, 17(4), 395-416.
  • (4) Bickel P J, Chen A (2009). A nonparametric view of network models and Newman-Girvan and other modularities. PNAS 106:21068–21073.
  • (5) Coja-Oghlan A, Mossel E and Vilenchik D (2009). A Spectral Approach to Analyzing Belief Propagation for 3-Coloring. Combinatorics, Probability and Computing 18: 881–912.
  • (6) Coja-Oghlan A (2010). Graph partitioning via adaptive spectral techniques. Combinatorics, Probability and Computing, 19(02):227–284.
  • (7) McSherry F (2001). Spectral partitioning of random graphs. Foundations of Computer Science, 2001. Proceedings. 42nd IEEE Symposium on, 529–537.
  • (8) Nadakuditi R R and Newman M E J (2012). Graph spectra and the detectability of community structure in networks. Phys. Rev. Lett. 108:188701.
  • (9) Decelle A, Krzakala F, Moore C, and Zdeborova L (2011). Phase transition in the detection of modules in sparse networks. Physical Review Letters 107 065701.
  • (10) Decelle A, Krzakala F, Moore C, and Zdeborova L (2011). Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications. Physical Review E 84:066106.
  • (11) Mossel E, Neeman J, Sly A (2012). Stochastic Block Models and Reconstruction. Preprint, arXiv:1202.1499v4.
  • (12) Zhang P, Krzakala F, Reichardt J, and Zdeborová L (2012). Comparative study for inference of hidden classes in stochastic block models. Journal of Statistical Mechanics: Theory and Experiment, 2012(12), P12021.
  • (13) McKay, B D (1981). The expected eigenvalue distribution of a large regular graph. Linear Algebra and its Applications 40, 203-216.
  • (14) Sodin S (2007). Random matrices, nonbacktracking walks, and orthogonal polynomials. Journal of Mathematical Physics, 48.
  • (15) Friedman J (2008). A proof of Alon’s second eigenvalue conjecture and related problems. Memoirs of the American Mathematical Society, no. 910.
  • (16) Hashimoto, Ki-ichiro (1989). Zeta functions of finite graphs and representations of p-adic groups. Automorphic forms and geometry of arithmetic varieties, 211-280.
  • (17) Alon N, Benjamini I, Lubetzky E and Sodin S (2007). Non-backtracking random walks mix faster. Communications in Contemporary Mathematics 9(4), 585–603.
  • (18) Watanabe, Y., Fukumizu, K. (2010). Graph zeta function in the Bethe free energy and loopy belief propagation. arXiv preprint arXiv:1002.3307.
  • (19) Vontobel, P. O. (2010). Connecting the Bethe entropy and the edge zeta function of a cycle code. In IEEE International Symposium on Information Theory Proceedings (ISIT), pp. 704-708.
  • (20) Ren P, Wilson R C, Hancock E R (2011). Graph characterization via Ihara coefficients. IEEE Transactions on Neural Networks, 22(2), 233-245.
  • (21) Wigner E P (1958). On the distribution of the roots of certain symmetric matrices. Ann. Math, 67(2), 325-327.
  • (22) Krivelevich M and Sudakov B (2003). The largest eigenvalue of sparse random graphs. Combinatorics, Probability and Computing 12(01), 61-72.
  • (23) Bollobás B, Janson S, Riordan O (2007). The phase transition in inhomogeneous random graphs. Random Structures & Algorithms 31(1), 3–122.
  • (24) Kesten H and Stigum B P (1966). Additional limit theorems for indecomposable multidimensional Galton-Watson processes. Ann. Math. Statist. 37:1463–1481.
  • (25) Mossel E and Peres Y (2003). Information flow on trees. The Annals of Applied Probability 13:817–844.
  • (26) Bass, H (1992). The Ihara-Selberg zeta function of a tree lattice. International Journal of Mathematics, 3(06), 717-797.
  • (27) Angel O, Friedman J and Hoory S (2007). The non-backtracking spectrum of the universal cover of a graph. arXiv preprint arXiv:0712.0192.
  • (28) Richardson T and Urbanke R (2008). Modern coding theory. Cambridge University Press.
  • (29) Adamic L, Glance N (2005). The political blogosphere and the 2004 US Election: Divided They Blog. In Proc 3rd Intl Workshop on Link Discovery.
  • (30) Zachary W W (1977). An information flow model for conflict and fission in small groups. Journal of Anthropological Research 33:452–473.
  • (31) Newman M E (2006). Finding community structure in networks using the eigenvectors of matrices. Physical review E, 74(3), 036104.
  • (32) Girvan M, and Newman M E (2002). Community structure in social and biological networks. Proceedings of the National Academy of Sciences, 99(12), 7821-7826.
  • (33) Lusseau D, Schneider K, Boisseau O J, Haase, P, Slooten, E, and Dawson S M. (2003). The bottlenose dolphin community of Doubtful Sound features a large proportion of long-lasting associations. Behavioral Ecology and Sociobiology, 54(4), 396-405.
  • (34) The network was compiled by Valdis Krebs and can be found on http://www.orgnet.com/.
  • (35) A matlab demo file can be found on http://panzhang.net/dea/dea.tar.gz .
  • (36) Ball, B, Karrer, B, and Newman, M E J (2011). Efficient and principled method for detecting communities in networks. Physical Review E, 84(3), 036103.
  • (37) Chen, A, Amini A A, Bickel P J and Levina E (2012). Fitting community models to large sparse networks. arXiv preprint arXiv:1207.2340.
  • (38) Gopalan P, Mimno D, Gerrish S, Freedman M and Blei D (2012). Scalable inference of overlapping communities. In Advances in Neural Information Processing Systems 25 (pp. 2258-2266).