I. Results
I.1 Results on the Stochastic Block Model
Also called the planted partition model, the stochastic block model (SBM) is a popular ensemble of networks with community structure. There are $q$ groups of nodes, and each node $i$ has a group label $t_i^* \in \{1, \ldots, q\}$; thus $\{t_i^*\}$ is the true, or planted, partition. Edges are generated independently according to a $q \times q$ matrix $p$, by connecting each pair of nodes $i, j$ with probability $p_{t_i^* t_j^*}$. Here for simplicity we discuss the commonly studied case where the $q$ groups have equal size and where $p$ has only two distinct entries, $p_{rs} = c_{\mathrm{in}}/n$ if $r = s$ and $p_{rs} = c_{\mathrm{out}}/n$ if $r \ne s$. We use $\varepsilon = c_{\mathrm{out}}/c_{\mathrm{in}}$ to denote the ratio between these two entries. In the assortative case, $c_{\mathrm{in}} > c_{\mathrm{out}}$ and $\varepsilon < 1$. When $\varepsilon$ is small, the community structure is strong; when $\varepsilon = 1$, the network becomes an ER graph.
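For readers who want to experiment, the planted partition ensemble above can be sampled directly. This is a minimal sketch using only the standard library; the function name and interface are ours, not from the paper:

```python
import random

def sample_sbm(n, q, c_in, c_out, seed=0):
    """Sample a planted-partition SBM: n nodes in q equal groups,
    within-group edge probability c_in/n, between-group c_out/n."""
    rng = random.Random(seed)
    labels = [i % q for i in range(n)]      # the planted partition
    p_in, p_out = c_in / n, c_out / n
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            p = p_in if labels[i] == labels[j] else p_out
            if rng.random() < p:
                edges.append((i, j))
    return labels, edges
```

The loop over all pairs is $O(n^2)$; for large sparse graphs one would instead draw the Poisson number of edges per block, but the simple version suffices for experiments at this scale.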
For a given average degree $c$, there is a so-called detectability phase transition decelleetalprl ; decelleetalpre , at a critical value

$$\varepsilon^* = \frac{\sqrt{c} - 1}{\sqrt{c} - 1 + q}. \qquad (2)$$

For $\varepsilon < \varepsilon^*$, BP can label the nodes with high accuracy; for $\varepsilon > \varepsilon^*$, neither BP nor any other algorithm can label the nodes better than chance, and indeed no algorithm can distinguish the network from an ER graph with high probability. This transition was recently established rigorously in the case $q = 2$ mossel2012stochastic ; massoulie2013community ; mossel2013proof .
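As a numeric sanity check of the threshold (2) as reconstructed above, the sketch below (with hypothetical helper names `eps_star` and `block_degrees`) verifies that at $\varepsilon = \varepsilon^*$ the block degrees satisfy the Kesten-Stigum condition $c_{\mathrm{in}} - c_{\mathrm{out}} = q\sqrt{c}$:

```python
import math

def eps_star(q, c):
    """Critical ratio eps* = c_out/c_in from (2), for q equal groups
    and mean degree c (form reconstructed in the text above)."""
    return (math.sqrt(c) - 1.0) / (math.sqrt(c) - 1.0 + q)

def block_degrees(q, c, eps):
    """Recover (c_in, c_out) from the mean degree c and the ratio eps,
    using c = (c_in + (q-1) c_out) / q."""
    c_in = q * c / (1.0 + (q - 1.0) * eps)
    return c_in, eps * c_in
```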
For larger numbers of groups, the situation is more complicated. For $q \le 4$, in the assortative case, this detectability transition coincides with the Kesten-Stigum bound kesten_stigum_1 ; kesten_stigum_2 . For $q \ge 5$ the Kesten-Stigum bound marks a conjectured transition to a “hard but detectable” phase where community detection is still possible but takes exponential time, while the detectability transition is at a larger value of $\varepsilon$; that is, the thresholds for reconstruction and robust reconstruction become different. Our claim is that our algorithm succeeds down to the Kesten-Stigum bound, i.e., throughout the detectable regime for $q \le 4$ and the easily detectable regime for $q \ge 5$.
In Fig. 2 we compare the behavior of our BP algorithm on an ER graph and on a network generated by the SBM in the detectable regime. Both graphs have the same size and average degree. For the ER graph (left) there are just two phases, separated by a transition at $\beta^*$: the paramagnetic phase, where BP converges to a factorized fixed point in which every node is equally likely to be in every group, and the spin glass phase, where replica symmetry is broken and BP fails to converge. The convergence time diverges at the transition. Note that in the spin glass phase the retrieval modularity returned by BP fluctuates wildly as BP jumps from one local optimum to another, and has little meaning. In any case, BP assumes replica symmetry, which is incorrect in this phase.
In contrast, the SBM network in Fig. 2 (right) has strong community structure. In addition to the paramagnetic and spin glass phases, there is now a retrieval phase in an intermediate range of $\beta$, where BP finds a retrieval state describing statistically significant community structure. The retrieval modularity jumps sharply at $\beta_R$ when we first enter this phase, and then increases gently as $\beta$ increases, staying close to the modularity of the planted partition. When we enter the spin glass phase at $\beta_{SG}$, the retrieval modularity fluctuates as in the ER graph. The convergence time diverges at both phase transitions.
We can compute two of these transition points analytically by analyzing the linear stability of the factorized fixed point (see Methods). Stability against random perturbations gives

$$\beta^*(q, c) = \log\left(\frac{q}{\sqrt{c} - 1} + 1\right), \qquad (3)$$

and stability against correlated perturbations gives

$$\beta_R = \log\left(\frac{q^2}{c_{\mathrm{in}} - c_{\mathrm{out}} - q} + 1\right). \qquad (4)$$

These cross at the Kesten-Stigum bound, where $c_{\mathrm{in}} - c_{\mathrm{out}} = q\sqrt{c}$ and $\beta_R = \beta^*$. We do not currently have an analytic expression for $\beta_{SG}$, the point at which the retrieval state gives way to the spin glass.
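The following sketch checks numerically that the two stability expressions cross at the Kesten-Stigum bound; the closed forms assumed here are our reconstruction of (3) and (4), and the function names are ours:

```python
import math

def beta_star(q, c):
    # reconstructed form of (3): stability against random perturbations
    return math.log(1.0 + q / (math.sqrt(c) - 1.0))

def beta_retrieval(q, c_in, c_out):
    # reconstructed form of (4): stability against correlated
    # perturbations (valid when c_in - c_out > q)
    return math.log(1.0 + q * q / (c_in - c_out - q))
```

With $q = 2$ and $c = 16$, the Kesten-Stigum bound is $c_{\mathrm{in}} - c_{\mathrm{out}} = 2\sqrt{16} = 8$, realized by $(c_{\mathrm{in}}, c_{\mathrm{out}}) = (20, 12)$, and the two thresholds indeed coincide there.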
In Fig. 3 (left) we show the phase diagram of our algorithm on SBM networks, including the paramagnetic, retrieval, and spin glass phases as a function of $\varepsilon$ and $\beta$, for fixed $q$ and average degree $c$. The boundary between the paramagnetic and retrieval phases is in excellent agreement with our expression (4). For $\varepsilon < \varepsilon^*$, our algorithm finds a retrieval state for $\beta$ in a range $\beta_R < \beta < \beta_{SG}$. On the right, we show the accuracy of the retrieval partition at $\beta = \beta^*$, defined as its overlap with the planted partition, i.e., the fraction of nodes labeled correctly.
We emphasize that $\beta^*$ is not the optimal value of $\beta$, i.e., it does not lie on the Nishimori line iba1999 ; NishimoriBook01 ; Montanari08 ; the optimal $\beta$ depends on the parameters of the SBM (see Appendix). Our claim is that setting $\beta = \beta^*$ in our algorithm succeeds throughout the easily detectable regime, even when those parameters are unknown. In Fig. 3 (right) we compare our algorithm with that of decelleetalprl ; decelleetalpre , which learns the SBM parameters using an expectation-maximization (EM) algorithm. Our algorithm achieves nearly the same overlap, without the need for the EM loop.
Left: phase diagram for networks generated by the stochastic block model, showing the paramagnetic (P), retrieval (R), and spin glass (SG) phases. Blue circles with error bars denote experimental estimates of $\beta_R$, the boundary between the paramagnetic and retrieval phases, and the solid green line shows our theoretical expression (4). The spin glass instability occurs for $\beta > \beta^*$ (red dash-dotted line), and $\varepsilon^*$ is the detectability transition (black dashed line). Right: the overlap of the retrieval partition at $\beta = \beta^*$ (blue circles) and of the partition obtained with the algorithm of decelleetalpre , which infers the parameters of the SBM with an additional EM learning algorithm. Each experiment is on the giant component of a network generated with $q$ groups and average degree $c$; we average over random instances.

I.2 Results on real-world networks and choosing the number of groups
We tested our algorithm on a number of real-world networks. As for networks generated by the SBM in the detectable regime, we find a retrieval phase between the paramagnetic and spin glass phases (see figure in Appendix). Rather than attempting to learn the optimal parameters or temperature for these networks, we simply set $\beta = \beta^*(q, c)$ as defined in (3), where $q$ is the ground-truth number of groups (if known) and $c$ is the average degree. Again, this value of $\beta$ is not optimal, and varying $\beta$ may improve the algorithm's performance; however, setting $\beta = \beta^*$ appears to work well in practice.
When the number of groups is not known, determining it is a classic model-selection problem. The maximum modularity typically grows with $q$. In contrast, the retrieval modularity stops growing when $q$ exceeds the correct value, giving us a principled method of choosing $q$ (see Appendix). For those networks where $q$ is known, we found that this procedure agrees perfectly with the ground truth.
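This plateau criterion can be sketched as a simple search loop. Here `retrieval_modularity` is a hypothetical callable standing in for a full BP run at $\beta^*(q)$, returning the retrieval modularity for a given $q$:

```python
def choose_q(retrieval_modularity, q_max=10, tol=1e-3):
    """Pick the smallest q at which the retrieval modularity stops
    growing; `retrieval_modularity(q)` is assumed to run BP at
    beta*(q) and return the retrieval modularity Q."""
    best_q, best_Q = 1, retrieval_modularity(1)
    for q in range(2, q_max + 1):
        Q = retrieval_modularity(q)
        if Q <= best_Q + tol:          # plateau reached: stop
            break
        best_q, best_Q = q, Q
    return best_q
```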
As shown in Table 1, our algorithm finds a retrieval state in all these networks, with high retrieval modularity and high overlap with the ground truth where it is known. For the Gnutella, Epinions, and web-Google networks, no ground truth is known; but in contrast with leskovec2009community , our algorithm finds significant large-scale communities.
While most of these networks are assortative, one network in the table, the adjacency network of common adjectives and nouns in the novel David Copperfield PhysRevE.74.036104 , is disassortative: nouns are more likely to be adjacent to adjectives than to other nouns, and vice versa. In this case we found a retrieval state with negative modularity, and high overlap with the ground truth, by setting $\beta$ to $-\beta^*$.
Network | n | m | q | β* | Q | overlap | time (sec) | # iterations
Zachary's karate club | 34 | 78 | 2 | 1.012 | 0.371 | 1 | 0.001 | 26
Dolphin social network | 62 | 159 | 2 | 0.948 | 0.395 | 0.887 | 0.001 | 33
Books about US politics | 105 | 441 | 3 | 0.948 | 0.521 | 0.829 | 0.002 | 23
Word adjacencies | 112 | 425 | 2 | 0.761 | 0.275 | 0.848 | 0.003 | 35
Political blogs | 1222 | 16714 | 2 | 0.387 | 0.426 | 0.948 | 0.043 | 18
Gnutella | 62586 | 147892 | 7 | 0.995 | 0.517 | - | 37.43 | 433
Epinions | 75888 | 405740 | 4 | 0.632 | 0.429 | - | 57.13 | 213
Web-Google | 916428 | 4322051 | 5 | 0.676 | 0.724 | - | 2331 | 505
I.3 Results on hierarchical clustering
Many networks appear to have hierarchical structure with communities and subcommunities on many scales clauset2004finding ; clauset2008hierarchical ; sales2007extracting ; PhysRevE.74.036104 ; peixoto2013hierarchical . We can look for such structures by working recursively: we determine the optimal number of groups, divide the network into subgraphs, and apply the algorithm to each one. We stop dividing when there is no retrieval state, indicating that the remaining subgraphs have no significant internal structure.
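The recursive procedure can be sketched as follows; `find_retrieval_partition` is a hypothetical callable that returns a list of sub-groups when a retrieval state exists in the subgraph induced by `nodes`, and `None` otherwise:

```python
def hierarchy(nodes, find_retrieval_partition):
    """Recursively split a node set; stop at subgraphs with no
    retrieval state, which become the leaves of the hierarchy."""
    parts = find_retrieval_partition(nodes)
    if parts is None or len(parts) <= 1:
        return nodes                   # leaf: no significant structure
    return [hierarchy(p, find_retrieval_partition) for p in parts]
```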
For networks generated by the SBM, each subgraph is an ER graph. Our algorithm finds no retrieval state in the subgraphs, so it stops after one level of division. The same occurs in some small real-world networks, e.g., Zachary's karate club. In some larger real-world networks, on the other hand, our algorithm repeatedly finds a retrieval state in the subgraphs, suggesting a deep hierarchical structure.
An example is the network of political blogs adamic2005political . Our algorithm first finds two large communities corresponding to liberals and conservatives, agreeing with the ground-truth labels on roughly 95% of the nodes (the overlap reported in Table 1). But as shown in Fig. 4, it splits these into subcommunities, eventually finding a hierarchy several levels deep (the shaded leaves of the tree in Fig. 4). We show the adjacency matrix with nodes ordered by this final partition on the right of Fig. 4, where the hierarchical structure is clearly visible. The modularity of the partitions at the 2nd through 5th levels decreases from level to level. This decreasing modularity may explain why the algorithm did not immediately split the network all the way down to the final subcommunities.
A nested SBM was used to explore hierarchical structure in peixoto2013hierarchical , where the blog network was also reported to have hierarchical structure. Our results are slightly different, giving a somewhat different number of subgroups, but the first levels of subdivision are similar.
I.4 Comparison with other algorithms
In this section we compare the performance of our algorithm with two popular algorithms: Louvain blondel2008fast and OSLOM Lancichinettiplosone . In particular, OSLOM tries to focus on statistically significant communities.
Louvain gives partitions with modularity similar to our algorithm's, but with a much larger number of groups, particularly on large networks. For example, on the Gnutella and Epinions networks leskovec2009community , our algorithm finds a handful of large-scale groups (see Table 1), while the Louvain method finds a far larger number of small groups with similar modularity. Thus our algorithm finds large-scale communities, with a modularity similar to that of the much smaller communities found by Louvain. Of course, we emphasize that maximizing the modularity is not our goal: finding statistically significant communities is.
We show results on synthetic networks in Fig. 5. On the left, we apply Louvain, OSLOM, and our algorithm to networks generated by the SBM. We compute the normalized mutual information (NMI) danon2005 between the inferred partition and the planted one. (We use the NMI rather than the overlap because the numbers of groups given by OSLOM and Louvain are very different from that of the planted partition.) For Louvain and OSLOM, the NMI drops off well below the detectability transition. On the right, we show the number of groups that each algorithm infers for an ER graph as a function of its size $n$. Our algorithm correctly chooses $q = 1$, recognizing that this network has no internal structure. The other algorithms overfit, inferring a number of communities that grows with $n$. In the Appendix we report on experiments on benchmark networks with heavy-tailed degree distributions LFRbenchmark , with similar results.
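For concreteness, here is one common definition of the NMI between two labelings (an illustrative sketch; treat it as an assumption rather than necessarily the exact variant of danon2005 used in Fig. 5):

```python
import math
from collections import Counter

def nmi(a, b):
    """Normalized mutual information between two labelings a, b:
    2 I(a;b) / (H(a) + H(b))."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    I = sum(c / n * math.log((c / n) / ((pa[x] / n) * (pb[y] / n)))
            for (x, y), c in pab.items())
    Ha = -sum(c / n * math.log(c / n) for c in pa.values())
    Hb = -sum(c / n * math.log(c / n) for c in pb.values())
    return 2 * I / (Ha + Hb) if Ha + Hb > 0 else 1.0
```

Note that the NMI is invariant under relabeling the groups, which is exactly why it is preferred over the raw overlap when the two partitions have different numbers of groups.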
II. Discussion
We have presented a physics-based method for finding statistically significant communities. Rather than using an explicit generative or graphical model, it uses a popular measure of community structure, namely the modularity. It does not attempt to maximize the modularity, which is both computationally difficult and prone to overfitting. Instead it estimates the marginals of the Gibbs distribution using a scalable BP algorithm derived from the cavity method (see next section), and defines the retrieval partition by assigning each node to its most-likely community according to these marginals.
In essence, the algorithm looks for the consensus of many partitions with high modularity. When this consensus exists, it indicates statistically significant community structure, as opposed to random fluctuations. Moreover, by testing for the existence of this retrieval state, as opposed to a spin glass state where the algorithm fluctuates between many unrelated local optima, we can determine the correct number of groups, and decompose a network hierarchically.
We note that this algorithm is related to BP for the degree-corrected stochastic block model (DCSBM). Specifically, for a fixed $q$, the modularity is linearly related to the log-likelihood of the DCSBM with particular parameters (see Appendix). However, our algorithm does not have to learn the parameters of the block model with an EM algorithm, or perform model selection between the stochastic block model and its degree-corrected variant yan2012model . To be clear, $\beta$ is still a tunable parameter that can be optimized, but the heuristic value $\beta^*(q, c)$ appears to work well for a wide range of networks.

In addition to the detectability transition in the SBM, another well-known barrier to community detection is the resolution limit Fortunato2007 , where communities become difficult to find when their size is $O(\sqrt{m})$ or less. In the Appendix, we give some evidence that our hierarchical clustering algorithm overcomes this barrier. Namely, for the classic example of a ring of cliques, at the second level our algorithm divides the graph precisely into these cliques.
Another recent proposal for determining the number of groups is to use the number of real eigenvalues of the nonbacktracking matrix outside the bulk of the spectrum Krzakala24122013 . For some networks, such as the political blogs, this gives a larger number than the $q$ we found here; it may be that, in some sense, this method detects not just top-level communities, but subcommunities deeper in the hierarchy. It would be interesting to perform a detailed comparison of the two methods.

Our approach can be extended to generalizations of the modularity, where the graph is weighted, or where a resolution parameter $\gamma$ controls the relative importance of the expected number of internal edges reichardt2006statistical . Finally, it would be interesting to apply BP to other objective functions, such as normalized cut or conductance, devising Hamiltonians from them and considering the resulting Gibbs distributions.
Finally, we note that rather than running BP once and using the resulting marginals, we could use decimation MMBook to fix the labels of the most biased nodes, run BP again to update the marginals, and so on. This would increase the running time of the algorithm, but it may improve its performance. Another approach would be reinforcement MMBook , where we add external fields that point toward the likely configuration. We leave this for future work.
III. Methods
III.1 Defining statistical significance
As described above, an ER random graph has many partitions with high modularity. However, these partitions are nearly uncorrelated with each other. In the language of disordered materials, the landscape of partitions is glassy: while the optimal one might be unique, there are many others whose modularity is almost as high, but which have a large Hamming distance from the optimum and from each other. If we define a Gibbs distribution on the partitions, we encounter either a paramagnetic state where the marginals are uniform, or a spin glass with replica symmetry breaking where we jump between local optima. In either case, focusing on any one of these optima is simply overfitting.
For networks such as on the right of Fig. 1, in contrast, there are many highmodularity partitions that are correlated with each other, and with the ground truth. As a result, the landscape has a smooth valley surrounding the ground truth. At a suitable temperature, the Gibbs distribution is in a retrieval phase with both low energy (high modularity) and high entropy, giving it a lower free energy than the paramagnetic state, with its marginals biased towards the ground truth. When BP converges to a fixed point, it finds a (local) minimum of the Bethe free energy, approximating this lower free energy phase.
We propose the existence of this retrieval phase as a physicsbased definition of statistical significance. When it exists, the retrieval partition defined by the maximum marginals is an optimal prediction for which nodes belong to which groups.
The idea of using the free energy to separate real community structure from random noise, and using the Gibbs marginals to define a partition, also appeared in decelleetalprl ; decelleetalpre . However, that work is based on a specific generative model, namely the stochastic block model, and the energy is (minus) the loglikelihood of the observed network. In contrast, we avoid explicit generative models, and focus directly on the modularity as a measure of community structure.
III.2 The cavity method and belief propagation
Our goal is to compute the marginal probability distribution of each node's group label, as well as the free energy of the Gibbs distribution. We could do this using a Markov chain Monte Carlo (MCMC) algorithm. However, to obtain marginals we would need many independent samples, and to obtain the free energy we would need to sample at many different temperatures. Thus MCMC is prohibitively slow for our purposes.
Instead, for sparse networks, we can use Belief Propagation Yedidia_etal_TR200122 , known in statistical physics as the cavity method MP01 . BP makes a conditional independence assumption, which is exact only on trees; however, in the regimes we will consider (the detectable regime of the stochastic block model, and typical realworld graphs), its estimates of the marginals are quite accurate. It also provides an estimate of the free energy, called the Bethe free energy, which is a function of one and twopoint marginals.
BP works with “messages” $\psi^{i\to j}_t$: these are estimates, sent from node $i$ to node $j$, of the marginal probability that $t_i = t$, based on $i$'s interactions with nodes $k \ne j$. The update equations for these messages are as follows:

$$\psi^{i\to j}_t = \frac{1}{Z^{i\to j}}\, e^{-\beta d_i \theta_t} \prod_{k \in \partial i \setminus j} \left[ 1 + (e^{\beta} - 1)\, \psi^{k\to i}_t \right]. \qquad (5)$$

Here $\partial i$ denotes the set of $i$'s neighbors, and $\theta_t$ denotes an external field acting on nodes in group $t$, which we update after each BP iteration. We refer to the Appendix for detailed derivations of the BP update equations and Bethe free energy.
For $q$ groups and $m$ edges, each iteration of (13) takes $O(qm)$ time. If $q$ is fixed, this is linear in the number of edges, and hence linear in the number of nodes when the network is sparse (i.e., when the average degree is constant). Moreover, these updates can easily be parallelized. Empirically, the number of iterations required to converge appears to depend very weakly on the network size, although in some cases it must grow at least logarithmically.
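A minimal, unoptimized sketch of this update loop (in Python rather than our C++ implementation; the variable names are ours, the message update follows the form of (13), and the adaptive field follows (14)):

```python
import math
import random
from collections import defaultdict

def bp_modularity(edges, n, q, beta, iters=100, tol=1e-6, seed=0):
    """One possible implementation of the BP sweep: messages
    psi[(i, j)][t] live on directed edges, theta is the adaptive
    external field, and the node marginals are returned."""
    rng = random.Random(seed)
    adj = defaultdict(list)
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    d = [len(adj[i]) for i in range(n)]
    two_m = 2 * len(edges)
    eb1 = math.exp(beta) - 1.0
    # random normalized initial messages on both directions of each edge
    psi = {}
    for i, j in edges:
        for a, b in ((i, j), (j, i)):
            v = [rng.random() + 0.1 for _ in range(q)]
            s = sum(v)
            psi[(a, b)] = [x / s for x in v]
    marg = [[1.0 / q] * q for _ in range(n)]
    for _ in range(iters):
        # adaptive field, updated once per sweep
        theta = [sum(d[i] * marg[i][t] for i in range(n)) / two_m
                 for t in range(q)]
        delta = 0.0
        for i, j in list(psi):
            new = []
            for t in range(q):
                x = math.exp(-beta * d[i] * theta[t])
                for k in adj[i]:
                    if k != j:
                        x *= 1.0 + eb1 * psi[(k, i)][t]
                new.append(x)
            s = sum(new)
            new = [x / s for x in new]
            delta = max(delta,
                        max(abs(a - b) for a, b in zip(new, psi[(i, j)])))
            psi[(i, j)] = new
        for i in range(n):
            v = []
            for t in range(q):
                x = math.exp(-beta * d[i] * theta[t])
                for k in adj[i]:
                    x *= 1.0 + eb1 * psi[(k, i)][t]
                v.append(x)
            s = sum(v)
            marg[i] = [x / s for x in v]
        if delta < tol:
            break
    return marg
```

Each sweep touches every directed edge once, so its cost is $O(qm)$ as stated above (up to the degree-dependent inner products, which a production implementation caches).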
III.3 The factorized solution and local stability
Observe that the factorized solution, $\psi^{i\to j}_t = 1/q$, where each node is equally likely to be in each possible group, is always a fixed point of (13). If BP converges to this solution, we cannot label the nodes better than chance, and the retrieval modularity is zero. This is the paramagnetic state.
There are two other possibilities: BP fails to converge, or it converges to a non-factorized fixed point, which we call the retrieval state. In the latter case, we can compute the marginals by

$$\psi^i_t = \frac{1}{Z^i}\, e^{-\beta d_i \theta_t} \prod_{k \in \partial i} \left[ 1 + (e^{\beta} - 1)\, \psi^{k\to i}_t \right], \qquad (6)$$

and define the retrieval partition that assigns each node to its most-likely community, $\hat t_i = \operatorname{argmax}_t \psi^i_t$. This partition represents the consensus of the Gibbs distribution: it indicates that there are many high-modularity partitions that are correlated with each other. The retrieval modularity is then a good measure of the extent to which the network has statistically significant community structure.
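The retrieval partition and its modularity can be computed from the marginals as follows (a sketch with our own helper names; the modularity computed here is the standard Newman-Girvan form):

```python
from collections import Counter

def retrieval_partition(marginals):
    """Assign each node to its most-likely group under the BP marginals
    (marginals: node -> list of group probabilities)."""
    return {i: max(range(len(p)), key=p.__getitem__)
            for i, p in marginals.items()}

def modularity(edges, part):
    """Newman-Girvan modularity of a partition `part` (node -> group):
    sum over groups of e_g/m - (D_g / 2m)^2."""
    m = len(edges)
    deg, e_in, D = Counter(), Counter(), Counter()
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
        if part[i] == part[j]:
            e_in[part[i]] += 1
    for i, k in deg.items():
        D[part[i]] += k
    return sum(e_in[g] / m - (D[g] / (2 * m)) ** 2
               for g in set(part.values()))
```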
On the other hand, if BP does not converge, this means that neither the factorized solution nor any other fixed point is locally stable; the spin glass susceptibility diverges, and replica symmetry is broken. In other words, the space of partitions breaks into an exponential number of clusters, and BP jumps from one to another. The retrieval partition obtained using the current marginals will change to a very different partition if we run BP a bit longer, or if we perturb the initial BP messages slightly. In the spin glass phase, we are free to define a retrieval modularity from the current marginals, but it fluctuates rapidly, and does not represent a consensus of many partitions.
The linear stability of the factorized solution can be characterized by computing the derivatives of the messages with respect to each other at the factorized fixed point. Using (13), we find that $\partial \psi^{i\to j}_t / \partial \psi^{k\to i}_s = T_{ts}$, where $T$ is the $q \times q$ matrix

$$T_{ts} = \frac{e^{\beta} - 1}{e^{\beta} - 1 + q}\left(\delta_{ts} - \frac{1}{q}\right). \qquad (7)$$

Its largest eigenvalue (in magnitude) is

$$\lambda = \frac{e^{\beta} - 1}{e^{\beta} - 1 + q}. \qquad (8)$$
On locally treelike graphs with Poisson degree distribution and average degree $c$, the factorized fixed point is then unstable with respect to random noise whenever $c\lambda^2 > 1$. This is also known as the de Almeida-Thouless local stability condition AlmeidaThouless78 , the Kesten-Stigum bound kesten_stigum_1 ; kesten_stigum_2 , or the threshold for census or robust reconstruction MM06 ; janson2004robust . In our case, it shows that $\beta$ must exceed the critical value $\beta^*$ given by (3). If the network has some other degree distribution but is otherwise random, (3) holds with $c$ replaced by the average excess degree, i.e., the expected number of additional neighbors of the endpoint of a random edge.
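A quick numeric check that expression (3) is exactly the solution of $c\lambda^2 = 1$, with $\lambda$ as in (8); both closed forms are our reconstruction, and the function names are ours:

```python
import math

def lam(beta, q):
    """Largest eigenvalue (in magnitude) of the derivative matrix T,
    as in (8): (e^beta - 1) / (e^beta - 1 + q)."""
    return (math.exp(beta) - 1.0) / (math.exp(beta) - 1.0 + q)

def critical_beta(q, c):
    """Closed-form solution of c * lam(beta, q)**2 = 1, i.e. (3)."""
    return math.log(1.0 + q / (math.sqrt(c) - 1.0))
```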
If there is no statistically significant community structure, then BP has just two phases, the paramagnetic one and the spin glass: for $\beta < \beta^*$ it converges to the factorized fixed point, and for $\beta > \beta^*$ it doesn't converge at all. On the other hand, if there are statistically significant communities, then BP converges to a retrieval state in the range $\beta_R < \beta < \beta_{SG}$. Typically $\beta_R < \beta^* < \beta_{SG}$, so that $\beta^*$ is in the retrieval phase: even if the factorized fixed point is locally stable, BP can still converge to a retrieval state if its free energy is lower than that of the paramagnetic solution. Thus we can test for statistically significant communities by running BP at $\beta = \beta^*$. Note that our calculation of $\beta^*$ in (3) assumes that the network is random conditioned on its degree distribution; in principle $\beta^*$ could fall outside the retrieval phase for real-world networks. In that case, our heuristic method of setting $\beta = \beta^*$ fails, and it would be necessary to scan values of $\beta$ in the vicinity of $\beta^*$ for the retrieval state.
To estimate $\beta_R$, we again consider the linear stability of BP around the factorized fixed point; but now we consider arbitrary perturbations, as opposed to random noise. Let $T$ be the $q \times q$ matrix defined in (7). The matrix of derivatives of all $2m$ messages with respect to each other is a tensor product $T \otimes B$, where $B$ is the nonbacktracking matrix Krzakala24122013 . The adaptive external field in the BP equations suppresses eigenvectors where every node is in the same community. As a result, the relevant eigenvalue is $\lambda \mu_2$, where $\lambda$ is the largest eigenvalue of $T$ and $\mu_2$ is the second-largest eigenvalue of $B$, and the factorized fixed point is unstable whenever $\lambda \mu_2 > 1$. For networks generated by the SBM, we have Krzakala24122013

$$\mu_2 = \frac{c_{\mathrm{in}} - c_{\mathrm{out}}}{q}. \qquad (9)$$
However, this assumes that the corresponding eigenvector of $B$ is correlated with the community structure, so that perturbing BP away from the factorized fixed point will lead to the retrieval state. This is true as long as $\mu_2$ is outside the bulk of $B$'s eigenvalues, which are confined to a disk of radius $\sqrt{c}$ in the complex plane Krzakala24122013 ; if it is inside the bulk, then the community structure is washed out by isotropic eigenvectors and becomes hard to find. Thus the communities are detectable as long as $\mu_2 > \sqrt{c}$, which is equivalent to $c_{\mathrm{in}} - c_{\mathrm{out}} > q\sqrt{c}$, or equivalently $\varepsilon < \varepsilon^*$. Thus the retrieval state exists all the way down to the Kesten-Stigum transition, where $\mu_2 = \sqrt{c}$, $\varepsilon = \varepsilon^*$, and $\beta_R = \beta^*$. At that point, the relevant eigenvalue crosses into the bulk, and the retrieval phase disappears.
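For small graphs, $B$ and its leading eigenvalue can be computed directly. Below is a sketch using dense matrices and power iteration (our own helper names; impractical beyond toy sizes, and power iteration only recovers the leading eigenvalue, not the bulk):

```python
def nonbacktracking(edges):
    """Dense nonbacktracking matrix B on the 2m directed edges:
    B[(i->j), (j->k)] = 1 whenever k != i."""
    de = [(i, j) for i, j in edges] + [(j, i) for i, j in edges]
    idx = {e: r for r, e in enumerate(de)}
    B = [[0.0] * len(de) for _ in de]
    for i, j in de:
        for a, b in de:
            if a == j and b != i:
                B[idx[(i, j)]][idx[(a, b)]] = 1.0
    return B, de

def power_eig(B, iters=200):
    """Leading eigenvalue by power iteration (B is nonnegative here)."""
    n = len(B)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(B[r][c] * v[c] for c in range(n)) for r in range(n)]
        lam = max(abs(x) for x in w)
        if lam == 0.0:
            return 0.0
        v = [x / lam for x in w]
    return lam
```

For a $(k+1)$-regular graph, every directed edge has exactly $k$ nonbacktracking continuations, so the leading eigenvalue of $B$ is $k$; the complete graph $K_4$ gives 2.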
We note that the paramagnetic, retrieval, and spin glass states were also studied in hu2012phase , using a generalized Potts model and a heat bath MCMC algorithm. However, their Hamiltonian depends on a tunable cutsize parameter, rather than on a general measure of community structure such as the modularity. Moreover, it is difficult to obtain analytical results on phase transitions using MCMC algorithms, while the stability of BP fixed points is quite tractable.
III.4 Defining the spin glass phase
While we have identified the spin glass phase with the nonconvergence of belief propagation, the true phase diagram is potentially more complicated. The spin glass phase is defined by the divergence of the spin glass susceptibility. If this phase appears continuously, then in sparse problems this is equivalent to the sensitivity of the BP messages to noise, i.e., whether it converges to a stable fixed point. However, if the spin glass phase appears discontinuously, it could be that BP converges even though the true susceptibility diverges (see e.g. Zdeborova2009 ).
We expect this to happen above the Nishimori line when the “hard but detectable” phase exists decelleetalpre : there is then a retrieval state with lower free energy than the factorized fixed point but with an exponentially small basin of attraction, so that BP starting with random messages fails to converge to the true minimum of the free energy. Detecting this spin glass phase would require us to go beyond the replica-symmetric BP equations used here, to equations with one-step replica symmetry breaking MMBook . In the assortative case of the stochastic block model, the hard-but-detectable phase exists for $q \ge 5$. Happily, the corresponding range of parameters is quite narrow; nevertheless, more work on this needs to be done.
A C++ implementation can be found at code .
Acknowledgements.
We are grateful to Silvio Franz, Florent Krzakala, Mark Newman, Federico Ricci-Tersenghi, Christophe Schulke, and Lenka Zdeborová for helpful discussions, and to Tiago de Paula Peixoto for drawing Fig. 4 (left) using his software at http://graphtool.skewed.de/. This work was supported by AFOSR and DARPA under grant FA95501210432.

References
 (1) Von Luxburg U (2007) A tutorial on spectral clustering. Stat Comput 17:395.
 (2) Newman MEJ (2006) Finding community structure in networks using the eigenvectors of matrices. Phys Rev E 74:036104.
 (3) Krzakala F, Moore C, Mossel E, Neeman J, Sly A, Zdeborová L, Zhang P (2013) Spectral redemption in clustering sparse networks. Proc Natl Acad Sci USA 110:20935.
 (4) Hastings MB (2006) Community detection as an inference problem. Phys Rev E 74:035102.
 (5) Decelle A, Krzakala F, Moore C, Zdeborová L (2011) Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications. Phys Rev E 84:066106.
 (6) Decelle A, Krzakala F, Moore C, Zdeborová L (2011) Inference and phase transitions in the detection of modules in sparse networks. Phys Rev Lett 107:065701.
 (7) Karrer B, Newman MEJ (2011) Stochastic blockmodels and community structure in networks. Phys Rev E 83:016107.
 (8) Clauset A, Newman MEJ, Moore C (2004) Finding community structure in very large networks. Phys Rev E 70:066111.
 (9) Blondel VD, Guillaume JL, Lambiotte R, Lefebvre E (2008) Fast unfolding of communities in large networks. J Stat Mech 2008:P10008.
 (10) Rosvall M, Bergstrom CT (2008) Maps of random walks on complex networks reveal community structure. Proc Natl Acad Sci USA 105:1118.
 (11) Fortunato S (2010) Community detection in graphs. Physics Reports 486:75.
 (12) Newman MEJ, Girvan M (2004) Finding and evaluating community structure in networks. Phys Rev E 69:026113.
 (13) Newman MEJ (2004) Fast algorithm for detecting community structure in networks. Phys Rev E 69:066133.
 (14) Duch J, Arenas A (2005) Community detection in complex networks using extremal optimization. Phys Rev E 72:027104.
 (15) Guimera R, SalesPardo M, Amaral LAN (2004) Modularity from fluctuations in random graphs and complex networks. Phys Rev E 70:025101.
 (16) Reichardt J, Bornholdt S (2006) Statistical mechanics of community detection. Phys Rev E 74:016110.
 (17) Zdeborová L, Boettcher S (2010) A conjecture on the maximum cut and bisection width in random regular graphs. J Stat Mech 2010:P02020.
 (18) Sulc P, Zdeborová L (2010) Belief propagation for graph partitioning. J Phys A: Math Gen 43:B5003.
 (19) Good BH, de Montjoye YA, Clauset A (2010) Performance of modularity maximization in practical contexts. Phys Rev E 81:046106.
 (20) Lancichinetti A, Radicchi F, Ramasco J (2010) Statistical significance of communities in networks. Phys Rev E 81:046110.
 (21) Lancichinetti A, Radicchi F, Ramasco J, Fortunato S (2011) Finding statistically significant communities in networks. PloS One 6:e18961.
 (22) Wilson JD, Wang S, Mucha PJ, Bhamidi S, Nobel AB (2009) A testing based extraction algorithm for identifying significant communities in networks. Oxford University Press.
 (23) Iba Y (1999) The Nishimori line and Bayesian statistics. J Phys A: Math Gen 32:3875.
 (24) Clauset A, Moore C, Newman MEJ (2008) Hierarchical structure and the prediction of missing links in networks. Nature 453:98.
 (25) Lancichinetti A, Fortunato S (2012) Consensus clustering in complex networks. Nature Scientific Reports 2:336.
 (26) Mossel E, Neeman J, Sly A (2012) Stochastic block models and reconstruction. arXiv:1202.1499.
 (27) Massoulie L (2013) Community detection thresholds and the weak Ramanujan property. arXiv:1311.3085.
 (28) Mossel E, Neeman J, Sly A (2013) A proof of the block model threshold conjecture. arXiv:1311.4115.
 (29) Kesten H, Stigum BP (1966) A limit theorem for multidimensional GaltonWatson processes. Ann Math Stat 37:1211.
 (30) Kesten H, Stigum BP (1966) Additional limit theorems for indecomposable multidimensional GaltonWatson processes. Ann Math Stat 37:1463.
 (31) Nishimori H (2012) Statistical Physics of Spin Glasses and Information Processing. Oxford University Press.
 (32) Montanari A (2008) Estimating random variables from random sparse observations. European Transactions on Telecommunications 19:385.
 (33) Zachary WW (1977) An information flow model for conflict and fission in small groups. Journal of Anthropological Research :452–473.
 (34) Adamic LA, Glance N (2005) The political blogosphere and the 2004 US election: divided they blog. Proceedings of the 3rd Intl Workshop on Link Discovery 452–473.
 (35) Lusseau D, Schneider K, Boisseau OJ, Haase P, Slooten E, Dawson SM (2003) The bottlenose dolphin community of Doubtful Sound features a large proportion of longlasting associations. Behavioral Ecology and Sociobiology 54:396.
 (36) Krebs V Social Network Analysis software & services for organizations, communities, and their consultants, www.orgnet.com/. Accessed September 26, 2014.
 (37) Leskovec J, Lang KJ, Dasgupta A, Mahoney MW (2009) Community structure in large networks: natural cluster sizes and the absence of large welldefined clusters. Internet Math 6:29.
 (38) SalesPardo M, Guimera R, Moreira AA, Amaral LAN (2007) Extracting the hierarchical organization of complex systems. Proc Natl Acad Sci USA 104:15224.
 (39) Peixoto TP (2014) Hierarchical block structures and highresolution model selection in large networks. Phys Rev X 4:011047.
 (40) Danon L, DiazGuilera A, Duch J, Arenas A (2005) Comparing community structure identification. J Stat Mech 2005:P09008.
 (41) Lancichinetti A, Fortunato S, Radicchi F (2008) Benchmark graphs for testing community detection algorithms. Phys Rev E 78:046110.
 (42) Yan X, Jensen JE, Krzakala F, Moore C, Shalizi CR, Zdeborová L, Zhang P, Zhu Y (2014) Model selection for degreecorrected block models. J Stat Mech 2014:P05007.
 (43) Fortunato S, Barthelemy M (2007) Resolution limit in community detection. Proc Natl Acad Sci USA 104:36.
 (44) Yedidia J, Freeman W, Weiss Y (2003) Understanding belief propagation and its generalizations. Exploring Artificial Intelligence in the New Millennium (Morgan Kaufmann Publishers Inc., San Francisco).
 (45) Mézard M, Parisi G (2001) The Bethe lattice spin glass revisited. Eur Phys J B 20:217.
 (46) De Almeida J, Thouless D (1978) Stability of the SherringtonKirkpatrick solution of a spin glass model. J Phys A: Math Gen 11:983.
 (47) Mézard M, Montanari A (2006) Reconstruction on trees and spin glass transition. J Stat Phys 124:1317.
 (48) Janson S, Mossel E (2004) Robust reconstruction on trees is determined by the second eigenvalue. Ann Prob 32:2630–2649.
 (49) Hu D, Ronhovde P, Nussinov Z (2012) Phase transitions in random Potts systems and the community detection problem. Phil Mag 92:406.
 (50) Zdeborová L (2009) Statistical physics of hard optimization problems. Acta Phys Slov 59:169.
 (51) Mézard M, Montanari A (2009) Information, Physics, and Computation. Oxford University Press.
 (52) A C++ implementation of our algorithm can be found at http://panzhang.net.
Appendix A Belief propagation equations and the Bethe free energy
In this section we derive the BP update equations appearing in the main text. BP works with “messages” $\psi^{i\to j}_{g_i}$: these are estimates, sent from node $i$ to node $j$, of the marginal probability that node $i$ belongs to group $g_i$, based on $i$’s interactions with nodes $k \ne j$. If the Hamiltonian is
$H(\{g_i\}) = -\sum_{i<j}\left(A_{ij} - \frac{d_i d_j}{2m}\right)\delta_{g_i,g_j}$,
the update equations for these messages are as follows:
(10) $\psi^{i\to j}_{g_i} = \frac{1}{Z^{i\to j}}\prod_{k\ne i,j}\sum_{g_k}\psi^{k\to i}_{g_k}\,e^{\beta\left(A_{ik} - d_i d_k/2m\right)\delta_{g_i,g_k}}$
Here $Z^{i\to j}$ is simply a normalization factor, and $\partial i$ denotes the neighborhood of node $i$. The BP estimate of the marginal probability of node $i$ is then
(11) $\psi^{i}_{g_i} = \frac{1}{Z^{i}}\prod_{k\ne i}\sum_{g_k}\psi^{k\to i}_{g_k}\,e^{\beta\left(A_{ik} - d_i d_k/2m\right)\delta_{g_i,g_k}}$
which is the same as (10) except that we remove the condition $k \ne j$. We can also estimate the two-point marginals, and in particular the probability that two neighboring nodes belong to the same group. If $(i,j)\in E$, the BP estimate of the probability that $g_i = s$ and $g_j = t$ is
(12) $\psi^{ij}_{st} = \frac{1}{Z^{ij}}\,\psi^{i\to j}_{s}\,\psi^{j\to i}_{t}\,e^{\beta\left(1 - d_i d_j/2m\right)\delta_{st}}$
The update equations (10) involve $O(N^2)$ messages: every node interacts with every other one, not just its neighbors. However, in the sparse case we can simplify the effect of non-neighbors, by replacing them with an external field as in decelleetalprl ; decelleetalpre . If $(i,k)\notin E$ and $\beta d_i d_k/2m \ll 1$, we have
$e^{-\beta d_i d_k \delta_{g_i,g_k}/2m} \approx 1 - \frac{\beta d_i d_k}{2m}\,\delta_{g_i,g_k}\,.$
In that case, we can identify the messages that $i$ sends to its non-neighbors with its marginal $\psi^i$. Then (10) simplifies to
(13) $\psi^{i\to j}_{t} = \frac{1}{Z^{i\to j}}\,e^{-\beta d_i \theta_t}\prod_{k\in\partial i\setminus j}\left[1 + (e^{\beta}-1)\,\psi^{k\to i}_{t}\right]$
where
(14) $\theta_t = \frac{1}{2m}\sum_{k} d_k\,\psi^{k}_{t}$
denotes an external field acting on nodes in group $t$, which we update after each BP iteration. Iterating (13) now has computational complexity $O(q(N+M))$ per iteration, which is linear in the number of edges $M$ when $q$ is fixed.
The Bethe free energy of a BP fixed point is a function of the messages:
(15) $F_{\mathrm{Bethe}} = -\frac{1}{\beta}\left(\sum_{i}\log Z^{i} - \sum_{(i,j)\in E}\log Z^{ij}\right)$
where $Z^i$ and $Z^{ij}$ are the normalization constants for the one- and two-point marginals appearing in (11) and (12). BP fixed points are also stationary points of the Bethe free energy Yedidia_etal_TR200122 .
Observe that the factorized solution, $\psi^{i\to j}_t = 1/q$, where each node is equally likely to be in each possible group, is always a fixed point of the BP equations (13). Assuming it does not get stuck in a local minimum, BP converges to a retrieval state whenever its Bethe free energy is less than that of the factorized state. If the network has average degree $c$, the free energy density of the factorized state is simply
$f_{\mathrm{para}} = \frac{c}{q} - \frac{\log q}{\beta} - \frac{c}{2\beta}\log\frac{e^{\beta}+q-1}{q}\,.$
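For concreteness, the factorized free energy density can be evaluated directly. The closed form encoded below, $f = c/q - (\log q)/\beta - (c/2\beta)\log((e^{\beta}+q-1)/q)$, is our evaluation of the Bethe free energy at the uniform messages $\psi = 1/q$, and should be checked against the published expression; in the $\beta \to \infty$ limit it tends to $c/q - c/2$.

```python
from math import exp, log

def f_paramagnetic(beta, c, q):
    """Free energy density of the factorized fixed point psi = 1/q.

    Encodes f = c/q - (log q)/beta - (c/(2 beta)) log((e^beta + q - 1)/q),
    obtained by evaluating the Bethe free energy at uniform messages."""
    x = (exp(beta) + q - 1) / q
    return c / q - log(q) / beta - (c / (2 * beta)) * log(x)

# for c = 3, q = 2 the large-beta limit c/q - c/2 is zero
print(f_paramagnetic(50.0, 3.0, 2.0))  # close to 0
```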
In Fig. 6 we compare the free energy, convergence time, and retrieval modularity for networks generated by the stochastic block model at three different values of $\varepsilon$, alongside an Erdős–Rényi graph of the same average degree $c$. For small enough $\beta$, their free energies all take the same paramagnetic value, since they are all in the paramagnetic phase. For each value of $\varepsilon$, there is a critical $\beta$ at which that network's free energy splits off from the others, where it makes a transition to a retrieval state with lower free energy. The retrieval modularity jumps to a nonzero value, indicating community structure, and the convergence time diverges at the transition. For the Erdős–Rényi graph, the apparent modularity also jumps, but at its transition it enters the spin glass phase rather than the retrieval phase: BP fails to converge and the retrieval modularity fluctuates, indicating partitions that are uncorrelated with each other.
Appendix B Relation with the degree-corrected stochastic block model
The degree-corrected stochastic block model (DC-SBM) was introduced in karrernewman to overcome the fact that the SBM typically places low-degree and high-degree vertices into different groups, since it expects the degree distribution within each group to be Poisson. The DC-SBM's parameters are the expected node degrees $\theta_i$ and a matrix of parameters $\omega_{rs}$. Given a partition $\{g_i\}$, the number of edges between each pair $i, j$ is Poisson-distributed with mean $\theta_i \theta_j \omega_{g_i g_j}$. In the simple graph case where $A_{ij} = 1$ if $(i,j)\in E$ and $A_{ij} = 0$ otherwise, the log-likelihood of the network is then
(16) $\log P(G \mid \theta, \omega, \{g_i\}) = \sum_{i<j}\left[A_{ij}\log(\theta_i\theta_j\omega_{g_i g_j}) - \theta_i\theta_j\omega_{g_i g_j}\right]$
If $\omega_{rs} = \omega_{\mathrm{in}}$ for $r = s$ and $\omega_{rs} = \omega_{\mathrm{out}}$ for $r \ne s$, the likelihood can be written as
(17) $\log P = \sum_{i<j}\left[A_{ij}\log(\theta_i\theta_j\omega_{\mathrm{out}}) - \theta_i\theta_j\omega_{\mathrm{out}}\right] + \sum_{i<j}\delta_{g_i g_j}\left[A_{ij}\log\frac{\omega_{\mathrm{in}}}{\omega_{\mathrm{out}}} - \theta_i\theta_j(\omega_{\mathrm{in}}-\omega_{\mathrm{out}})\right]$
Comparing with the definition of modularity, if we set $\theta_i = d_i$ and choose $\omega_{\mathrm{in}}$ and $\omega_{\mathrm{out}}$ such that
(18) $\beta = \log\frac{\omega_{\mathrm{in}}}{\omega_{\mathrm{out}}} = 2m\,(\omega_{\mathrm{in}} - \omega_{\mathrm{out}})$
then the second term in (17) is $-\beta H(\{g_i\})$. Since the first term in (17) does not depend on $\{g_i\}$, we have
$\log P(G \mid \{g_i\}) = -\beta H(\{g_i\}) + \mathrm{const}\,,$
and the Gibbs distribution $e^{-\beta H}/Z$ is exactly the posterior distribution over partitions in the DC-SBM.
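Given $\beta$ and $m$, the matching DC-SBM parameters can be solved for in closed form: writing the two relations $\log(\omega_{\mathrm{in}}/\omega_{\mathrm{out}}) = \beta$ and $2m(\omega_{\mathrm{in}} - \omega_{\mathrm{out}}) = \beta$ together gives $\omega_{\mathrm{out}} = \beta / (2m(e^{\beta}-1))$ and $\omega_{\mathrm{in}} = \omega_{\mathrm{out}} e^{\beta}$. A short sketch of this correspondence, with arbitrary example values:

```python
from math import exp, log

def dcsbm_params(beta, m):
    """DC-SBM parameters (omega_in, omega_out) matching the modularity
    Gibbs distribution at inverse temperature beta, via the relations
    beta = log(omega_in/omega_out) = 2m (omega_in - omega_out)."""
    omega_out = beta / (2 * m * (exp(beta) - 1))
    omega_in = omega_out * exp(beta)
    return omega_in, omega_out

# example: beta = 1 on a network with m = 100 edges
w_in, w_out = dcsbm_params(beta=1.0, m=100)
print(w_in, w_out)
```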
Thus, for any fixed $\beta$, there are parameters of the DC-SBM such that these distributions have the same free energy and the same ground state. Belief propagation on the DC-SBM was described in yan2012model , and one can optimize the parameters through an expectation–maximization algorithm analogous to decelleetalpre ; decelleetalprl . However, our approach is different in several ways.

- We define community structure directly in terms of a classic measure, the modularity, as opposed to the log-likelihood of a generative model.
- Rather than having to fit the parameters of the DC-SBM with an EM algorithm, we have a single temperature parameter $\beta$. We can usually detect communities by setting $\beta$ to the value prescribed in the main text; at worst, we just have to scan a small region.
- For real-world networks the retrieval modularity appears to be a good guide to the number of groups $q$, while the free energy of the (DC-)SBM continues to decrease for larger $q$.
- Our approach appears to work equally well for networks with Poisson degree distributions (generated by the SBM) and those with heavy-tailed degree distributions, such as the LFR benchmark LFRbenchmark and the network of political blogs, where the DC-SBM does much better karrernewman . In particular, we have no need to do model selection between the SBM and DC-SBM, as was done using the Bethe free energy in yan2012model .
Appendix C The Nishimori line and the optimal temperature
When data is produced by an underlying generative model, inference of the latent parameters can be done optimally along the Nishimori line iba1999 ; NishimoriBook01 , where the Gibbs distribution is exactly the posterior distribution of the latent parameters (in this case the group labels or partitions). If the network is generated by the DC-SBM, then (18) gives the $\beta$ that corresponds to the correct parameters on the Nishimori line. Determining the parameters, and therefore $\beta$, could be done with an EM algorithm as in decelleetalprl ; decelleetalpre , but our goal is to avoid this additional learning step. Moreover, if the network is not actually generated by the DC-SBM, there is a priori no value of $\beta$ that corresponds to the Nishimori line, and no way to determine the optimal $\beta$ without access to the ground truth.
However, for synthetic networks generated by the SBM, we can construct an approximate Nishimori line by ignoring the difference between the SBM and the DC-SBM, i.e., by assuming that the expected degrees are all equal to the average degree $c$. This gives
$\beta_N = \log\frac{p_{\mathrm{in}}}{p_{\mathrm{out}}} = \log\frac{1}{\varepsilon}\,.$
In Fig. 7 we show the phase diagram from the main text with this approximate Nishimori line added. It passes through the critical point, as one can check analytically, and it avoids the spin-glass phase, passing directly from the paramagnetic phase to the retrieval phase. This recovers the fact that replica symmetry breaking cannot occur on the Nishimori line Montanari08 .
Appendix D Choosing the number of groups
Choosing the number of groups $q$ in a network is a classic model selection problem. Setting $q$ by maximizing the modularity is a widely used heuristic in the network literature; however, as we have already seen, it is prone to overfitting. For example, the maximum modularity of an Erdős–Rényi graph is an increasing function of $q$, even though the correct model has just one group. Similarly, in the stochastic block model the likelihood increases, or the ground state energy decreases, until every node is assigned to its own group.
One approach decelleetalprl ; decelleetalpre is to use the free energy rather than the ground state energy. In essence, the entropic term penalizes overfitting, and gives us the total likelihood of the model summed over all partitions, as opposed to the likelihood of the best partition. This approach works well on synthetic graphs: the free energy decreases until we reach the correct number of groups, after which it stays roughly constant. However, on real-world networks the free energy continues to decrease with $q$, for example as shown in Fig. 8 of decelleetalpre . Thus, for networks not generated by the SBM, it is not clear that this method works.
Here we propose to use the retrieval modularity $Q_R$ as a criterion for choosing $q$. Namely, we claim that $Q_R$ increases with $q$ until we reach the correct value. For larger $q$, either $Q_R$ stays the same, or the retrieval phase disappears and we enter the spin glass phase. In Fig. 8 we plot $Q_R$ and the BP convergence time for the karate club network with different values of $q$. With $q = 2$, i.e., the ground-truth number of groups, the retrieval phase is very large. For larger $q$, the retrieval phase becomes narrower, and $Q_R$ does not increase. Note the similarity with Fig. 2 (right) in the main text.
In Fig. 9, we plot $Q_R$ for different values of $q$ as a function of $\beta$ for three networks with known community structure: a synthetic network generated by the SBM, the karate club zachary1977information , and a network of political books polbooks . In each case, $Q_R$ stops growing at the correct number of groups, and is nearly independent of $\beta$ throughout the retrieval phase. (To deal with fluctuations, in practice we do not increase $q$ unless the retrieval modularity increases by at least some threshold value.) Thus our method gives the correct number of communities, rather than overfitting.
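The stopping rule just described can be sketched as follows. Here `retrieval_modularity` is a hypothetical stand-in for a full BP run at $q$ groups, and the table of $Q_R$ values is invented for illustration; only the thresholded stopping logic is the point.

```python
def choose_q(retrieval_modularity, q_max=10, threshold=0.01):
    """Increase q while the retrieval modularity keeps improving.

    retrieval_modularity(q) should run BP with q groups and return Q_R;
    here it is a placeholder for the full algorithm.  We demand an
    improvement of at least `threshold` before accepting a larger q."""
    best_q, best_Q = 1, retrieval_modularity(1)
    for q in range(2, q_max + 1):
        Q = retrieval_modularity(q)
        if Q > best_Q + threshold:
            best_q, best_Q = q, Q
        else:
            break  # Q_R stopped growing: keep the previous q
    return best_q

# invented Q_R values that plateau at q = 3
table = {1: 0.0, 2: 0.35, 3: 0.52, 4: 0.523, 5: 0.524}
print(choose_q(lambda q: table[q], q_max=5))  # -> 3
```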
Note that here $q$ refers to the number of groups at the top level of organization in the network. In the main text, we discuss using our approach to recursively divide communities into subcommunities. In that case, we use this procedure to determine the number of subcommunities we should split the network into at each stage, and stop splitting when we reach communities with $q = 1$, i.e., with no subcommunities of their own.
Appendix E Additional comparisons with Louvain and OSLOM
In Fig. 10 we show comparisons between our BP algorithm, Louvain blondel2008fast , and OSLOM Lancichinettiplosone on networks with power-law degree distributions. On the left, the graphs are generated by the LFR benchmark process LFRbenchmark . We show the normalized mutual information danon2005 as a function of the mixing parameter $\mu$. As for the SBM graphs shown in the main text, there is a parameter range where BP achieves a higher NMI than the other algorithms. On the right, we show results for a network with no community structure, whose degree distribution follows a power law. While BP correctly chooses $q = 1$ as the number of groups, the other algorithms overfit, finding a number of communities that grows with the network size. These results are similar to those shown in Fig. 5 of the main text.
Appendix F The resolution limit
In this section we describe results of our algorithm on the ring-of-cliques network, which is the standard example of the resolution limit Fortunato2007 . This network consists of a number of cliques, each composed of a small number of nodes, connected in a ring: each clique is joined to its two neighboring cliques by a single link. Thus the intuitively correct partition of the network puts each clique into its own group. However, when the cliques are sufficiently small compared to the network as a whole, maximizing the modularity forces us to combine multiple cliques Fortunato2007 : the division into groups consisting of several neighboring cliques each has higher modularity than the correct partition with one group per clique. As a consequence, maximizing the modularity fails to divide the network correctly into the cliques.
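The effect is easy to verify numerically. The following computes the modularity $Q = \sum_r [L_r/m - (D_r/2m)^2]$ of the two partitions in closed form for a ring of $c$ cliques of size $k$; the instance with 30 five-node cliques is our illustrative choice.

```python
def ring_of_cliques_modularity(c, k):
    """Modularity of two partitions of a ring of c cliques of k nodes each.

    Returns (Q_single, Q_pairs): one group per clique, versus groups
    made of two neighboring cliques (requires even c)."""
    assert c % 2 == 0
    m = c * k * (k - 1) // 2 + c          # clique edges plus c ring links
    # one group per clique: k(k-1)/2 internal edges, degree sum k(k-1)+2
    e1, d1 = k * (k - 1) // 2, k * (k - 1) + 2
    Q_single = c * (e1 / m - (d1 / (2 * m)) ** 2)
    # pairs of neighboring cliques: the joining link becomes internal
    e2, d2 = k * (k - 1) + 1, 2 * k * (k - 1) + 4
    Q_pairs = (c // 2) * (e2 / m - (d2 / (2 * m)) ** 2)
    return Q_single, Q_pairs

q1, q2 = ring_of_cliques_modularity(30, 5)
print(q1, q2)  # merging pairs of cliques wins: q2 > q1
```

For 30 cliques of 5 nodes this gives roughly 0.876 for the correct partition and 0.888 for the partition into pairs of cliques, so a pure modularity maximizer merges the cliques.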
In Fig. 11 we plot the dendrograms obtained by our hierarchical clustering algorithm starting from three different initial conditions (from top to bottom). All three dendrograms have two levels below the root. The first split creates groups consisting of multiple cliques, but the second split correctly assigns each clique to its own group. At that point the algorithm concludes that the cliques have no internal structure, and it stops subdividing. This suggests that our hierarchical clustering algorithm may be able to avoid the resolution limit.