Improved Graph Clustering

10/11/2012 ∙ by Yudong Chen, et al. ∙ The University of Texas at Austin

Graph clustering involves the task of dividing nodes into clusters, so that the edge density is higher within clusters as opposed to across clusters. A natural, classic and popular statistical setting for evaluating solutions to this problem is the stochastic block model, also referred to as the planted partition model. In this paper we present a new algorithm--a convexified version of Maximum Likelihood--for graph clustering. We show that, in the classic stochastic block model setting, it outperforms existing methods by polynomial factors when the cluster size is allowed to have general scalings. In fact, it is within logarithmic factors of known lower bounds for spectral methods, and there is evidence suggesting that no polynomial time algorithm would do significantly better. We then show that this guarantee carries over to a more general extension of the stochastic block model. Our method can handle the settings of semi-random graphs, heterogeneous degree distributions, unequal cluster sizes, unaffiliated nodes, partially observed graphs and planted clique/coloring etc. In particular, our results provide the best exact recovery guarantees to date for the planted partition, planted k-disjoint-cliques and planted noisy coloring models with general cluster sizes; in other settings, we match the best existing results up to logarithmic factors.

I Introduction

This paper proposes a new algorithm for the following task: given an undirected, unweighted graph, assign the nodes to disjoint clusters so that the density of edges within clusters is higher than the density of edges across clusters. Clustering arises in applications such as community detection, user profiling, link prediction, collaborative filtering, etc. In these applications, one is often given as input a set of pairwise similarity relationships (either “1” or “0”) and the goal is to identify groups of similar objects. For example, given the friendship relations on Facebook, one would like to detect tightly connected communities, which is useful for subsequent tasks like customized recommendation and advertisement.

Graphs in modern applications have several characteristics that complicate graph clustering:

  • Small density gap: the edge density across clusters is only a small additive or multiplicative factor different from within clusters;

  • Sparsity: the graph is overall very sparse even within clusters;

  • Outliers: there may exist a large number of nodes that do not belong to any clusters and are loosely connected to the rest of the graph;

  • High dimensionality: the number of clusters may grow unbounded as a function of the number of nodes, which means the sizes of the clusters can be vanishingly small compared to the total number of nodes;

  • Heterogeneity: the cluster sizes, node degrees and edge densities may be non-uniform; there may even be edges that are not well modeled by probability distributions, as well as hierarchical cluster structures.

Various large modern datasets and graphs have such characteristics [1, 2]; examples include the web graph and the social graphs of various social networks. As is well recognized, these characteristics make clustering more difficult. When the difference between in-cluster and across-cluster edge densities is small, the clustering structure is less pronounced and thus harder to detect. Sparsity further reduces the amount of information and makes the problem noisier. In the high-dimensional regime, there are many small clusters, which are easy to lose in the noise. Heterogeneous and non-random structures in the graph foil many algorithms that otherwise perform well. Finally, the existence of hierarchical structures and outliers renders many existing algorithms and theoretical results inapplicable, as they fix the number of clusters a priori and force each node to be assigned to a cluster. It is desirable to design an algorithm that can handle all these issues in a principled manner.

I-A Our Contributions

Our algorithmic contribution is a new method for unweighted graph clustering. It is motivated by the maximum-likelihood estimator for the classical Stochastic Blockmodel [3] (a.k.a. the Planted Partition Model [4]) for random clustered graphs. In particular, we show that this maximum-likelihood estimator can be written as a linear objective over combinatorial constraints; our algorithm is a convex relaxation of these constraints, yielding a convex program overall. While this is the motivation, the method performs well, both in theory and in practice, in settings beyond the standard stochastic blockmodel.

Our main analytical result is a theoretical guarantee on the algorithm's performance; we study it in a semi-random generalized stochastic blockmodel. This model generalizes not only the standard stochastic blockmodel and planted partition model, but many other classical planted models, including the planted k-disjoint-cliques model [5, 6], the planted coloring model [7, 8] and their semi-random variants [9, 10, 11]. Our main result gives the condition, as a function of the in/cross-cluster edge densities, the density gap, the cluster sizes and the numbers of inliers and outliers, under which our algorithm is guaranteed to recover the ground-truth clustering. When the in-cluster density exceeds the cross-cluster density, the condition reads

here all the parameters are allowed to scale with n, the total number of nodes. An analogous result holds for the reverse case, in which the cross-cluster density is the larger of the two.

While the planted and stochastic block models have a rich literature, this single result shows that our algorithm outperforms every existing method for the standard planted partition, planted k-disjoint-cliques and planted noisy-coloring models, and matches them (up to at most logarithmic factors) in all other cases, in the sense that our algorithm succeeds for a larger range of the parameters. In fact, there is evidence indicating that we are close to the boundary beyond which no polynomial-time algorithm can be expected to work. Moreover, the proof of our main theorem is relatively simple, relying only on standard concentration results. Our simulation study supports our theoretical findings: the proposed method is effective in clustering noisy graphs and outperforms existing methods.

The rest of the paper is organized as follows: Section I-B provides an overview of related work; Section II presents our algorithm; Section III describes the Semi-Random Generalized Stochastic Blockmodel, a generalization of the standard stochastic blockmodel that allows modeling of the issues mentioned above; Section IV presents the main results, namely a performance analysis of our algorithm for the semi-random generalized stochastic blockmodel and a detailed comparison to the existing literature on this problem; Section V provides simulation results; finally, the proofs of our theoretical results are given in Sections VI to IX.

I-B Related Work

The general field of clustering, or even graph clustering, is too vast for a detailed survey here; we focus on the most related threads, and there primarily on work that provides analytical guarantees for the resulting algorithms.

Stochastic block models: Also called “planted models” [3, 4], these are arguably the most natural random clustered graph models. In the simplest or standard setting, the nodes are partitioned into disjoint equal-sized subsets (called the true clusters), and then edges are generated independently at random, with the probability of an edge between two nodes in the same cluster higher than the probability when the two nodes are in different clusters. The algorithmic clustering task in this setting is to recover the true clusters given the graph. The edge probabilities and the size of the smallest cluster typically govern whether an algorithm can perform this clustering or not.

There is now a long line of analytical work on stochastic block models; we focus on methods that allow for exact recovery (i.e., every node is correctly classified), and summarize the conditions required by known methods in Table I. As can be seen, we improve over all existing methods by polynomial factors. In addition, and as opposed to several of these methods, we can handle outliers, heterogeneity, hierarchy in clustering, etc. A complementary line of work has investigated lower bounds in this setting, i.e., for what values or scalings of the model parameters it is not possible (either for any algorithm, or for any polynomial-time algorithm) to recover the underlying true clusters [12, 13, 14]. We discuss these two lines of work in more detail in the main results section.

Footnote: To facilitate direct comparison, this table specializes some of the results to the case where every underlying partition (i.e., cluster) is of the same size and the in/cross-cluster edge probabilities are uniform. Some of the algorithms above need this assumption, and some, like ours, do not. The soft-O notation ignores log factors. For each of the following works, the table lists the required cluster size, density gap and sparsity: Boppana (1987) [15], Jerrum et al. (1998) [16], Condon et al. (2001) [3], Carson et al. (2001) [17], Feige et al. (2001) [10], McSherry (2001) [5], Bollobas (2004) [9], Giesen et al. (2005) [18], Shamir (2007) [19], Coja-Oghlan (2010) [20], Rohe et al. (2011) [21], Oymak et al. (2011) [22], Chaudhuri et al. (2012) [12], Ames (2012) [23], and our result.
TABLE I: Comparison with literature for the standard Stochastic Blockmodel

Convex methods for matrix decomposition: Our method is related to recent literature on the recovery of low-rank matrices using convex optimization, and in particular the recovery of such matrices from “sparse” perturbations (i.e., where a fraction of the elements of the low-rank matrix are possibly arbitrarily modified, while others are untouched). Sparse and low-rank matrix decomposition using convex optimization was initiated by [24, 25]; follow-up works [26, 27] have the current state-of-the-art guarantees on this problem, and [28] applies it directly to graph clustering.

The method in this paper is Maximum Likelihood, but it can also be viewed as a weighted version of sparse and low-rank matrix decomposition, with different elements of the sparse part penalized differently, based on the given input graph. There is currently no literature on or analysis of weighted matrix decomposition; in that sense, while our weights have a natural motivation in our setting, our recovery results are likely to have broader implications, for example for robust versions of PCA in which not all errors are created equal but instead come with a corresponding prior. Moreover, our lower-bound result immediately implies a tight necessary condition for the standard matrix decomposition problem.

II Algorithm

We now describe our algorithm; as mentioned, it is a convex relaxation of Maximum Likelihood (ML) as applied to the standard stochastic blockmodel. So, in what follows, we first develop notation and the exact ML estimator, and then its relaxation.

ML for the standard stochastic blockmodel: Recall that in the standard stochastic blockmodel the nodes are partitioned into disjoint clusters, and edges in the graph are chosen independently; the probability of an edge between a pair of nodes in the same cluster is some value p, and for a pair of nodes in different clusters it is some value q, with p > q. Given the graph, the task is to find the underlying clusters that generated it. To write down the ML estimator for this, let us represent any candidate partition by a corresponding cluster matrix Y, where Y_ij = 1 if and only if nodes i and j are in the same cluster, and Y_ij = 0 otherwise (we adopt the convention that Y_ii = 1 for any node i that belongs to a cluster). Any cluster matrix Y thus has a block-diagonal structure, with each block being all 1's.

A vanilla ML estimator then involves optimizing a likelihood subject to the combinatorial constraint that the search space is the set of cluster matrices. Let A be the observed adjacency matrix of the graph (we assume its diagonal entries are zero); then the log-likelihood function of A given Y is

We notice that this can be written, via a re-arrangement of terms, as

(1)

here the constant collects the terms that are independent of Y. The ML estimator maximizes the above expression subject to Y being a cluster matrix. While the objective is a linear function of Y, this optimization problem is combinatorial due to the requirement that Y be a cluster matrix (i.e., block-diagonal with each block being all-ones), and is intractable in general.
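To make the preceding derivation concrete, the following sketch (ours, not the paper's code; the helper names and the toy values of p and q are illustrative assumptions) builds a cluster matrix from a label assignment and evaluates the rearranged log-likelihood, which is linear in the cluster matrix; it also illustrates that the rank of a cluster matrix equals the number of clusters.

import numpy as np

def cluster_matrix(labels):
    # Y[i, j] = 1 iff nodes i and j share a cluster; by convention Y[i, i] = 1.
    labels = np.asarray(labels)
    return (labels[:, None] == labels[None, :]).astype(float)

def ml_score(A, Y, p, q):
    # Log-likelihood of the graph A given the partition encoded by Y, up to an
    # additive constant that does not depend on Y.  Rearranging the Bernoulli
    # likelihood gives a linear function of Y: every observed edge contributes
    # log(p/q) to pairs placed in the same cluster, and every non-edge
    # contributes -log((1-q)/(1-p)).
    c_edge = np.log(p / q)
    c_noedge = np.log((1.0 - q) / (1.0 - p))
    off_diag = ~np.eye(A.shape[0], dtype=bool)
    return np.sum((c_edge * A - c_noedge * (1.0 - A)) * Y * off_diag)

labels_true = np.array([0, 0, 0, 1, 1, 1])
Y_true = cluster_matrix(labels_true)
A = Y_true - np.eye(6)                      # a noiseless graph: two disjoint triangles
print(np.linalg.matrix_rank(Y_true))        # 2, the number of clusters
print(ml_score(A, Y_true, p=0.9, q=0.2))    # higher than any wrong partition, e.g.:
print(ml_score(A, cluster_matrix([0, 0, 1, 1, 0, 1]), p=0.9, q=0.2))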

Our algorithm: We obtain a convex and tractable algorithm by replacing the constraint “Y is a cluster matrix” with (i) the constraints 0 ≤ Y_ij ≤ 1 for all elements i, j, and (ii) a nuclear norm regularizer in the objective (the nuclear norm of a matrix is the sum of its singular values). The latter encourages Y to be low-rank, and is based on the well-established insight that a cluster matrix has low rank; in particular, its rank equals the number of clusters. (We discuss other related relaxations after we present our algorithm.)

Also notice that the likelihood expression (1) is linear in Y, and only the ratio of its two coefficients matters. We thus introduce a tuning parameter which allows us to choose any ratio. This has the advantage that instead of knowing both p and q, we only need to choose one number, which should lie between q and p; we remark on how to choose it later. This leads to the following convex formulation:

(2)
s.t. (3)

where the weights are given by

(4)

Here the regularization factor balances the contributions of the nuclear norm and the likelihood term, and the specific forms of the weights are derived from our analysis. The optimization problem (2)–(3) is convex and can be cast as a Semidefinite Program (SDP) [24, 29]. More importantly, it can be solved using efficient first-order methods for large graphs (see Section V-A).
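Purely as an illustration, the convex program can be prototyped in a few lines with a generic solver such as cvxpy. The weight values c_A and c_Ac and the nuclear-norm coefficient lam below are placeholders standing in for the quantities in (4), whose exact forms are derived in our analysis, and the function and variable names are ours.

import cvxpy as cp

def convex_cluster(A, c_A, c_Ac, lam):
    # Maximize the (rearranged) likelihood, which is linear in Y, minus a
    # nuclear-norm regularizer, over the box relaxation 0 <= Y_ij <= 1.
    n = A.shape[0]
    Y = cp.Variable((n, n), symmetric=True)
    likelihood = cp.sum(cp.multiply(c_A * A - c_Ac * (1.0 - A), Y))
    objective = cp.Maximize(likelihood - lam * cp.normNuc(Y))
    problem = cp.Problem(objective, [Y >= 0, Y <= 1])
    problem.solve()
    return Y.value

For graphs beyond a few hundred nodes, the first-order method of Section V-A should be used instead of a generic solver.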

Our algorithm is given as Algorithm 1. Depending on the given graph and the choice of the tuning parameter, the optimal solution may or may not be a cluster matrix. Checking whether a given solution is a cluster matrix can be done easily, e.g., via an SVD, which will also reveal the cluster memberships if it is one. If it is not, any one of several rounding/aggregation ideas (e.g., the one in [30]) can be used empirically; we do not delve into this approach in this paper, and simply output failure. In Section IV we provide sufficient conditions under which the optimal solution is a cluster matrix, so that no rounding is required.

  Input: adjacency matrix A, tuning parameter.
  Solve program (2)–(3) with weights (4). Let Y be an optimal solution.
  if Y is a cluster matrix then
     Output cluster memberships obtained from Y.
  else
     Output “Failure”.
  end if
Algorithm 1 Convex Clustering
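For concreteness, the membership-check step of Algorithm 1 can be carried out as in the following sketch (our own, not the authors' implementation): round the numerical output, verify that it is a valid cluster matrix, and read the clusters off its blocks.

import numpy as np
from scipy.sparse.csgraph import connected_components

def read_clusters(Y, tol=1e-3):
    # Returns a label vector (with -1 for outliers) if Y is numerically a
    # cluster matrix, and None otherwise.
    Yr = (Y > 0.5).astype(float)
    if np.max(np.abs(Y - Yr)) > tol:          # entries must be essentially 0/1
        return None
    n = Y.shape[0]
    labels = -np.ones(n, dtype=int)           # -1 marks outliers (all-zero rows)
    clustered = Yr.sum(axis=1) > 0
    if clustered.any():
        sub = Yr[np.ix_(clustered, clustered)]
        _, sub_labels = connected_components(sub, directed=False)
        # Every connected component must form a full all-ones block.
        block = (sub_labels[:, None] == sub_labels[None, :]).astype(float)
        if not np.array_equal(sub, block):
            return None
        labels[clustered] = sub_labels
    return labels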

II-A Remarks about the Algorithm

Note that while we derive our algorithm from the standard stochastic blockmodel, our analytical results hold in a much more general setting. In practice, one could execute the algorithm (with an appropriate choice of the tuning parameter, and hence of the weights) on any given graph.

Tighter relaxations: The formulation (2)–(3) is not the only way to relax the non-convex ML estimator. Instead of the nuclear norm regularizer, a hard constraint may be used. One may further replace this constraint with a positive semidefinite constraint and additional linear constraints, both satisfied by any cluster matrix (the linear constraints hold when there is no outlier). It is not hard to check that these modifications lead to convex relaxations with smaller feasible sets, so any performance guarantee for our formulation (2)–(3) also applies to these alternative formulations. We choose not to use these potentially tighter relaxations for the following theoretical and practical reasons: (a) these formulations do not work well when the numbers of outliers and clusters are unknown; (b) we do not obtain better theoretical guarantees with them. In fact, the work [30] considers these tighter constraints, but their exact recovery guarantees are improved upon by ours. Moreover, as we argue in the next section, our guarantees are likely to be order-wise optimal, and thus alternative convex formulations are unlikely to provide significant improvements in a scaling sense. (c) Our simpler formulation facilitates efficient solution for large graphs via first-order methods; we describe one such method in Section V-A.

Choice of the tuning parameter: Our algorithm requires one extraneous input, the tuning parameter. For the standard planted k-disjoint-cliques problem [6, 5] (with cliques planted in an Erdos-Renyi random graph), the parameter can be set directly (see Section IV-C2). For the standard stochastic blockmodel (with the nodes partitioned into equal-size clusters and edge probabilities being uniformly p inside and q across clusters), the value can be easily computed from data (see Section IV-D). In these cases, our algorithm has no tuning parameters whatsoever and does not require knowledge of the number or sizes of the clusters. For the general setting, the parameter should be chosen to lie between q and p, which now represent the upper bound on the cross-cluster edge density and the lower bound on the in-cluster edge density, respectively. As such, the parameter can be interpreted as the resolution of the clustering algorithm. To see this, suppose the clusters have a hierarchical structure, where each big cluster is partitioned into smaller sub-clusters with higher edge densities inside. In this case, it is a priori not clear which level of clusters, the larger ones or the smaller ones, should be recovered. This ambiguity is resolved by specifying the parameter: our algorithm recovers those clusters with in-cluster edge density above the chosen value and cross-cluster density below it. With a larger value, the algorithm operates at a higher resolution and detects small clusters with high density. By varying this parameter, our algorithm can be turned into a method for multi-resolution clustering [1] which explores all levels of the cluster hierarchy; we leave this to future work. Importantly, the above example shows that it is generally impossible to uniquely determine the value of the parameter from data alone.

III The Generalized Stochastic Blockmodel

While our algorithm above is derived as a relaxation of ML for the standard stochastic blockmodel, we establish performance guarantees in a much more general setting, which is defined by six parameters and described below.

Definition 1 (Generalized Stochastic Blockmodel (GSBM)).

If p ≥ q (p ≤ q, resp.), consider a random graph generated as follows. The nodes are divided into two sets. The nodes in the first set are further partitioned into disjoint subsets, which we will refer to as the “true” clusters; let K be the minimum size of a true cluster. For every pair of nodes that belong to the same true cluster, the edge between them is present in the graph with probability at least (at most, resp.) p, while for every pair where the nodes are in different clusters the edge is present with probability at most (at least, resp.) q. The nodes in the second set are not in any cluster (we will call them outliers); for each outlier and each other node, there is an edge between the pair with probability at most (at least, resp.) q.

Definition 2 (Semi-random GSBM).

On a graph generated from GSBM with p ≥ q (p ≤ q, resp.), an adversary is allowed to arbitrarily (a) add (remove, resp.) edges between nodes in the same true cluster, and (b) remove (add, resp.) edges between pairs of nodes if they are in different clusters, or if at least one of them is an outlier.

The objective is to find the underlying true clusters, given the graph generated from the semi-random GSBM.
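To fix ideas, the following sketch (our own; the cluster sizes, probabilities and the toy adversary are illustrative choices, not part of the model definition) generates a graph from the homophily case of GSBM and then applies a monotone adversarial modification in the spirit of Definition 2.

import numpy as np

def sample_gsbm(cluster_sizes, n_outliers, p, q, rng=None):
    # In-cluster pairs get edge probability p; all other pairs (cross-cluster
    # or touching an outlier) get probability q <= p.
    rng = np.random.default_rng(rng)
    labels = np.repeat(np.arange(len(cluster_sizes)), cluster_sizes)
    labels = np.concatenate([labels, -np.ones(n_outliers, dtype=int)])  # -1 = outlier
    n = labels.size
    same = (labels[:, None] == labels[None, :]) & (labels[:, None] >= 0)
    prob = np.where(same, p, q)
    U = rng.random((n, n))
    A = np.triu((U < prob).astype(float), k=1)
    return A + A.T, labels                     # symmetric, no self-loops

def adversary_homophily(A, labels):
    # Monotone adversary for p >= q: may add in-cluster edges and delete others.
    # As a toy example, it completes one cluster into a clique.
    A = A.copy()
    members = np.flatnonzero(labels == 0)
    A[np.ix_(members, members)] = 1.0
    A[members, members] = 0.0                  # keep the diagonal empty
    return A

# Example: A, labels = sample_gsbm([30, 30, 40], n_outliers=20, p=0.6, q=0.1)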

The standard stochastic blockmodel/planted partition model is a special case of GSBM with no outliers, all cluster sizes equal to K, and all in-cluster (cross-cluster, resp.) probabilities equal to p (q, resp.). GSBM generalizes the standard models as it allows for heterogeneity in the graph:

  • p and q are lower and upper bounds, respectively, instead of exact edge probabilities;

  • K is only a lower bound, so clusters can have different sizes;

  • outliers (nodes not in any cluster) are allowed.

GSBM removes many of the restrictions of standard planted models and better reflects practical graphs.

The semi-random GSBM allows for further modeling power. It blends the worst-case models, which are often overly pessimistic (for example, the minimum graph bisection problem is NP-hard), with purely random graphs, which are extremely unstructured and have very special properties usually not possessed by real-world graphs [31]. This semi-random framework has been used and studied extensively in the computer science literature as a better model for real-world networks [9, 10, 11]. At first glance, the adversary seems to make the problem easier by adding in-cluster edges and removing cross-cluster edges when p ≥ q. This is not necessarily the case. The adversary can significantly change some statistical properties of the random graph (e.g., alter spectral structure and node degrees, and create local optima by adding dense spots [10]), and foil algorithms that over-exploit such properties. For example, some spectral algorithms that work well on random models are proved to fail in the semi-random setting [8]. An algorithm that is robust against an adversary is more likely to work well on real-world graphs. As shown later, our algorithm possesses this desired property.

III-A Special Cases

GSBM recovers as special cases many classical and widely studied models for clustered graphs, obtained by particular choices of its parameters. We classify these models into two categories based on the relation between the in-cluster density p and the cross-cluster density q.

  1. p ≥ q: GSBM with p ≥ q models homophily, the tendency of individuals belonging to the same community to connect more than individuals in different communities. Special cases include:

    • Planted Clique [32]: a single true cluster that forms a clique (in-cluster probability 1), with every other pair of nodes connected with probability 1/2;

    • Planted k-Disjoint-Cliques [5]: several disjoint true clusters, each forming a clique (in-cluster probability 1);

    • Stochastic Blockmodel/Planted Partition [4, 3]: no outliers, with all cluster sizes equal to K.

  2. p ≤ q: GSBM with p ≤ q models heterophily. Special cases include:

    • Planted Coloring [10]: no edges within clusters (in-cluster probability 0) and no outliers;

    • Planted Cut/noisy coloring [9, 13]: in-cluster probability strictly smaller than cross-cluster probability, with no outliers.

In the next section, we provide performance guarantees for our algorithm under the semi-random GSBM; these imply guarantees for all the special cases above. We provide a detailed comparison with the literature after presenting our results.

IV Main Results: Performance Guarantees

In this section we provide analytical performance guarantees for our algorithm under the semi-random GSBM. We provide a unified theorem, then discuss its consequences for various special cases and compare with the literature. We also discuss how to estimate the tuning parameter in the special case of the standard stochastic blockmodel. We shall first consider the homophily case p ≥ q; the heterophily case p ≤ q is a direct consequence and is discussed in Section IV-C3. All proofs are postponed to Sections VI to IX.

IV-A A Monotone Lemma

Our optimization-based algorithm has a nice monotonicity property: adding or removing edges that are “aligned with” the optimal solution (as is done by the adversary) cannot result in a different optimum. This is summarized in the following lemma.

Lemma 1.

Suppose Y is the unique optimum of (2)–(3) for a given A. Suppose we arbitrarily change some entries of A to obtain A', by (a) choosing some pairs (i, j) with Y_ij = 1 but A_ij = 0 and setting A'_ij = 1, and (b) choosing some pairs (i, j) with Y_ij = 0 but A_ij = 1 and setting A'_ij = 0. Then Y is also the unique optimum of (2)–(3) with A' as the input.

The lemma shows that our algorithm is inherently robust under the semi-random model. In particular, the algorithm succeeds in recovering the true clusters on the semi-random GSBM as long as it succeeds on GSBM with the same parameters. In the sequel, we therefore focus solely on GSBM, with the understanding that any guarantee for it immediately implies a guarantee for the semi-random variant.

IV-B Main Theorem

Let Y* be the matrix corresponding to the true clusters in GSBM, i.e., Y*_ij = 1 if and only if nodes i and j are clustered nodes belonging to the same true cluster, and 0 otherwise. The theorem below establishes conditions under which our algorithm, specifically the convex program (2)–(3), yields this Y* as the unique optimum (without any further need for rounding, etc.) with high probability (w.h.p.). Throughout the paper, “with high probability” means with probability tending to one as n grows, at a rate governed by a universal absolute constant.

Theorem 1.

Suppose the graph is generated according to GSBM with p ≥ q. If the tuning parameter in (4) is chosen to satisfy

(5)

then Y* is the unique optimal solution to the convex program (2)–(3) w.h.p. provided

(6)

where the constant involved is absolute and independent of the model parameters.

Our theorem quantifies the tradeoff among the four parameters governing the hardness of GSBM, namely the minimum in-cluster edge density p, the maximum cross-cluster edge density q, the minimum cluster size K and the number of outliers, that is required for our algorithm to succeed, i.e., to recover the underlying true clustering without any error. Note that we can handle any values of these parameters as long as they satisfy the condition in the theorem; in particular, they are allowed to scale with n. Interestingly, the theorem has no explicit dependence on the number of clusters except through its relation to the minimum cluster size K.

We now discuss the tightness of Theorem 1 in terms of these parameters. Under the conditions stated below, we have a near-matching converse result.

Theorem 2.

Suppose all clusters have equal size K, and the in-cluster (cross-cluster, resp.) edge probabilities are uniformly p (q, resp.), with p > q. Under GSBM with these parameters and n sufficiently large, for any algorithm to correctly recover the clusters with at least constant probability, we must have

where the constant involved is absolute.

This theorem gives a necessary condition for any algorithm to succeed, regardless of its computational complexity. It shows that Theorem 1 is optimal up to logarithmic factors for all values of p and q in the regime covered by the theorem.

Remark.

In the case with no outliers, identification of the clusters is equivalent to recovering the low-rank matrix Y* (whose rank equals the number of clusters, and whose incoherence parameter [25, 24] equals 1 when the clusters have equal size) and a sparse matrix from their sum, where the sparse matrix has a certain fraction of randomly-located non-zero entries. Therefore, Theorem 2 implies a necessary condition for the standard low-rank-plus-sparse decomposition problem [24, 25]. This shows that the result in [26] is tight up to logarithmic factors in the corresponding regime.

For smaller cluster sizes, notice that Theorem 1 requires the minimum cluster size K to be at least of order √n, since the left-hand side of (6) is bounded above. This lower bound on K is achieved when p and q are both constants independent of n and K. There are reasons to believe that this requirement is unlikely to be improvable using polynomial-time algorithms. Indeed, the special case with a single planted cluster, p = 1 and q = 1/2 corresponds to the classical planted clique problem [32]; finding a planted clique of size smaller than order √n is widely believed to be computationally hard even on average, and this has been used as a hard problem for cryptographic applications [33, 34].

For other values of the parameters, no general and rigorous converse result exists. However, there is evidence suggesting that no other polynomial-time algorithm is likely to have better guarantees than our result (6). The authors of [13] show, using non-rigorous but deep arguments from statistical physics, that recovering the clustering is impossible in polynomial time below a certain threshold on the density gap. Moreover, the work in [14] shows that a large class of spectral algorithms fails under the same condition. In view of these results, it is possible that our algorithm is optimal with respect to all polynomial-time algorithms for all values of the parameters.

Several further remarks regarding Theorem 1 are in order.

  • A nice feature of our result is that the density gap p − q only needs to be large as compared to the noise level, whereas several other existing results (see Table I) require a lower bound (as a function of the other parameters) on the in-cluster density p itself. When the minimum cluster size is of order n, the edge densities p and q are allowed to be vanishingly small.

  • The number of clusters is allowed to grow rapidly with n; this is called the high-dimensional setting [21]. In particular, our algorithm can recover on the order of √n equal-size clusters. Any algorithm with a better scaling would recover cliques of size smaller than order √n, an unlikely task in polynomial time in light of the hardness of the planted clique problem discussed above.

  • The number of outliers can also be large; in fact, almost all of the nodes can be outliers, and this remains true even when there are multiple clusters that are not cliques (i.e., whose in-cluster density is strictly below 1).

  • Not all existing methods can handle non-uniform edge probabilities and node degrees, which often require special treatment (see [12]). This issue is addressed seamlessly by our method by definition of GSBM.

In the following subsection, we discuss various planted problems to which Theorem 1 applies and compare with existing work. Our results match the best existing results in all cases (up to logarithmic factors), and in many important settings lead to order-wise stronger guarantees.

IV-C Consequences and Comparison with Literature

IV-C1 Standard Stochastic Blockmodel (a.k.a. Planted Partition Model)

This model assumes that all clusters have the same size K, that there are no outliers, and that the in-cluster and across-cluster edge probabilities are uniformly p and q, respectively, with p > q. We compare our result to past approaches and theoretical results in Table I. Our result improves on all existing results by polynomial factors in the required cluster size and density gap. This means that we can handle much noisier and sparser graphs, especially when the number of clusters is growing. A recent paper [35], which appeared after the publication of the conference version [36] of this manuscript, proposes a tensor approach for graph clustering. Our guarantee is a few logarithmic factors better than their results applied to the standard stochastic blockmodel.

IV-C2 Planted k-Disjoint-Cliques Problem

Here the task is to find a set of disjoint cliques, each of size at least K, that have been planted in an Erdos-Renyi random graph. Setting the in-cluster probability p = 1 in Theorem 1, we obtain the following guarantee for the planted k-disjoint-cliques problem.

Corollary 1.

For the planted k-disjoint-cliques problem, the formulation (2)–(4), with the tuning parameter chosen according to Theorem 1, finds the hidden cliques w.h.p. provided

where the constant involved is absolute.

In the regime where the number of cliques is allowed to scale with n and the background edge density is bounded away from zero, the best previous results are given in [5] and in [23]. Corollary 1 is stronger than both of them in this regime.

IV-C3 The Heterophily Case (p < q)

Given a graph generated from the semi-random GSBM with intra/inter-cluster densities p < q, we can run our algorithm on the graph whose adjacency matrix is obtained by subtracting A and the identity from the all-one matrix, i.e., the complement graph. Note that the complement graph can be considered as generated from GSBM with intra/inter-cluster densities 1 − p and 1 − q, where now the in-cluster density exceeds the cross-cluster density. With this reduction, Lemma 1 and Theorem 1 immediately yield the following guarantee.

Corollary 2.

Under the semi-random GSBM with p < q, the formulation (2)–(4) applied to the complement graph, with the tuning parameter obeying

finds the true clustering w.h.p. provided

where the constant involved is absolute.

This corollary immediately yields guarantees for the planted coloring problem [7] and the planted cut problem [9] (a.k.a. planted noisy coloring [13]). We are not aware of any existing work that explicitly considers the mirrored GSBM in its general form (heterophily with heterogeneous densities, outliers, and potential non-random edges). However, since any guarantee for the homophily case of GSBM translates into a guarantee for the heterophily case via the above reduction, Table I provides a comparison with existing work when there are no outliers and the edge probabilities and cluster sizes are uniform. Again, our guarantee outperforms all existing ones. A sketch of the reduction follows.
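A minimal sketch of the reduction just described (the function name and the solver argument are placeholders of ours): to handle heterophily, simply hand the complement graph to the homophily procedure.

import numpy as np

def cluster_heterophily(A, solve_homophily):
    # Flip every off-diagonal entry: the complement graph has in-cluster density
    # at least 1 - p and cross-cluster density at most 1 - q, and 1 - p > 1 - q
    # whenever p < q, so the homophily guarantee applies.
    n = A.shape[0]
    A_complement = np.ones((n, n)) - A - np.eye(n)
    return solve_homophily(A_complement)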

IV-C4 Planted Coloring Problems

A special case of the above problem is the planted coloring problem, where the in-cluster edge density is zero and there are no outliers. The best existing results are given by various algorithms (e.g., [7, 8]). By Corollary 2, our algorithm succeeds under a corresponding condition on the cross-cluster density and the cluster size. We match the best existing algorithms when the number of color classes is small, and are off by a few log factors when it is larger.

IV-D Estimating the Tuning Parameter in Special Cases

We have argued that specifying the tuning parameter in a completely data-driven way is ill-posed for the general GSBM, e.g., when the clusters are hierarchical. However, in special cases this can be done reliably, with strong guarantees. Consider the standard stochastic blockmodel, where all clusters have the same size K, the edge probabilities are uniform (i.e., equal to p within clusters and q across clusters, with p > q), and there are no outliers or non-random edges. Observe that the matrix of edge probabilities consists of blocks of p's and q's, and equals (up to a relabeling of the nodes) the Kronecker product of a K × K all-one matrix and an r × r circulant matrix with entries p on the diagonal and q elsewhere, where r is the number of clusters. The all-one matrix has the single non-zero eigenvalue K, and the circulant matrix has eigenvalues p + (r − 1)q and p − q with multiplicities 1 and r − 1, respectively. The eigenvalues of the Kronecker product are the pairwise products of the eigenvalues of these two matrices. It follows that the first eigenvalue of the probability matrix is (p − q)K + qn with multiplicity 1, the second eigenvalue is (p − q)K with multiplicity r − 1, and the third eigenvalue is 0 with multiplicity n − r [18]. This motivates us to use the eigenvalues of the observed adjacency matrix A to estimate p, q and the tuning parameter; see Algorithm 2.

  1. Compute the eigenvalues of A and sort them in decreasing order.

  2. Estimate the number of clusters r from the number of large eigenvalues, and set the estimated cluster size to n divided by this estimate.

  3. Estimate q from the difference between the first and second eigenvalues, which is approximately qn.

  4. Estimate p from the second eigenvalue and the estimates above, and set the tuning parameter accordingly.

Algorithm 2 Estimation of the tuning parameter from data
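The following sketch conveys the spirit of Algorithm 2 using the eigenvalue relations derived above (the first eigenvalue is roughly (p − q)K + qn and the second is roughly (p − q)K). The thresholding rule for the number of clusters and the final choice of the tuning parameter as the midpoint of the estimated densities are our own assumptions, since the corresponding formulas are not reproduced above; the names p_hat, q_hat, t_hat, r_hat are ours.

import numpy as np

def estimate_parameters(A):
    n = A.shape[0]
    eig = np.sort(np.linalg.eigvalsh(A))[::-1]       # eig[0] >= eig[1] >= ...
    # Assumed rule: count eigenvalues above the noise level, which scales like
    # the square root of the average degree, to estimate the number of clusters.
    avg_degree = A.sum() / n
    r_hat = max(int(np.sum(eig > 3.0 * np.sqrt(avg_degree + 1.0))), 1)
    K_hat = n / r_hat
    q_hat = (eig[0] - eig[1]) / n                     # lambda_1 - lambda_2 ~ q n
    p_hat = eig[1] / K_hat + q_hat                    # lambda_2 ~ (p - q) K
    t_hat = (p_hat + q_hat) / 2.0                     # assumed choice strictly between q and p
    return p_hat, q_hat, t_hat, r_hat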

The following theorem guarantees that the estimation errors are small.

Theorem 3.

Under the standard stochastic blockmodel and condition (6) in Theorem 1, the parameters estimated in Algorithm 2 satisfy the following with high probability, where the constant involved is absolute and positive:

In particular, the estimated tuning parameter satisfies condition (5) in Theorem 1. The above theorem also ensures that Algorithm 2 is a consistent estimator of the parameters p and q when condition (6) is satisfied, a result of independent interest. Combining Theorems 1 and 3, we obtain a complete algorithm that is guaranteed to find the clusters under the standard stochastic blockmodel obeying condition (6), without knowledge of any generative parameters of the underlying model.

V Empirical Results

V-A Implementation Issues

The convex program (2)–(3) can be solved using a general purpose SDP solver, but this method does not scale well to problems with more than a few hundred nodes. To facilitate fast and efficient solution, we propose to use a family of first-order algorithms called Augmented Lagrange Multiplier (ALM) methods. Note that the program (2)–(3) can be rewritten as

(7)
s.t

where the weight matrix takes one value on the entries with an observed edge and another value on the remaining entries, and ∘ denotes the element-wise matrix product. This problem can be recognized as a generalization of the standard convex formulation of the low-rank plus sparse matrix decomposition problem [25, 24], whose numerical solution has been well studied. We adapt the ALM method of [37] to the above problem; the result is given in Algorithm 3.

  Input: the adjacency matrix A and the weights of problem (7).
  Initialize the primal variables, the Lagrange multiplier, and the penalty parameter.
  while not converged do
     Update the low-rank part by singular value soft-thresholding.
     Update the sparse part by element-wise weighted soft-thresholding.
     Update the Lagrange multiplier using the current constraint residual.
     Increase the penalty parameter.
  end while
  Return the low-rank component (the candidate cluster matrix).
Algorithm 3 ALM for Minimizing Nuclear Norm plus Weighted ℓ1 Norm

Here the element-wise weighted soft-thresholding operator is defined as

In other words, it shrinks each entry of its argument towards zero by the corresponding weight. The unweighted version, with a single scalar threshold, is also used. The stopping criterion and the parameters of the algorithm are chosen similarly to [37].
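For completeness, the following sketch (ours; the penalty schedule, stopping rule and the omission of the box constraint on the low-rank part are simplifying assumptions) implements an inexact-ALM loop of the kind described above for a problem of the form: minimize the nuclear norm of L plus the weighted entry-wise l1 norm of S, subject to L + S = M.

import numpy as np

def svd_shrink(X, tau):
    # Singular-value soft-thresholding: the proximal operator of tau * ||.||_*.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft_shrink(X, T):
    # Element-wise weighted soft-thresholding: shrink each entry toward zero by T_ij.
    return np.sign(X) * np.maximum(np.abs(X) - T, 0.0)

def weighted_decompose(M, W, rho=1.5, tol=1e-7, max_iter=500):
    # Inexact ALM for:  min ||L||_* + ||W o S||_1   s.t.  L + S = M.
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Lam = np.zeros_like(M)                         # Lagrange multiplier for L + S = M
    mu = 1.25 / np.linalg.norm(M, 2)               # initial penalty, a common generic choice
    for _ in range(max_iter):
        L = svd_shrink(M - S + Lam / mu, 1.0 / mu)
        S = soft_shrink(M - L + Lam / mu, W / mu)
        residual = M - L - S
        Lam = Lam + mu * residual
        mu = rho * mu
        if np.linalg.norm(residual) <= tol * np.linalg.norm(M):
            break
    return L, S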

V-B Simulations

We perform experiments on synthetic data and compare with other methods. We generate a graph using the stochastic blockmodel with n nodes, clusters of equal size, and fixed in-cluster and cross-cluster edge probabilities. We apply our method to the graph, where we pick the tuning parameter using Algorithm 2 and solve the optimization problem using Algorithm 3. Due to limited numerical accuracy, the output of our algorithm may not be strictly integral, so we apply the following simple rounding: compute the mean of the entries of the output matrix, and round each entry to 1 if it is greater than the mean, and to 0 otherwise. We measure the error by the entry-wise distance between the rounded output and the true cluster matrix, which equals the number of misclassified pairs. We say our method succeeds if it misclassifies less than a fixed small fraction of the pairs.
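A two-line sketch of the rounding and error count just described (the function name is ours):

import numpy as np

def round_and_count_errors(Y_hat, Y_true):
    # Threshold each entry at the mean of all entries, then count disagreements
    # with the true cluster matrix (the number of misclassified pairs).
    Y_round = (Y_hat > Y_hat.mean()).astype(float)
    return Y_round, int(np.sum(np.abs(Y_round - Y_true)))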

For comparison, we consider three alternative methods: (1) Single-Linkage clustering (SLINK) [38], a hierarchical clustering method that merges the most similar clusters in each iteration; we use the difference between the neighborhoods of two nodes as their distance, and terminate when SLINK finds a clustering with the correct number of clusters. (2) A spectral clustering method [39], where we run SLINK on the top singular vectors of the adjacency matrix. (3) The low-rank-plus-sparse approach [28, 22], followed by the rounding scheme described in the last paragraph. Note that the first two methods assume knowledge of the number of clusters, which is not available to our method.

For each value of one edge-density parameter, we find the smallest value of the other for which a method succeeds, and average over multiple trials. The results are shown in Figure 1(a), where the area above each curve corresponds to the range of feasible parameters for each method. It can be seen that our method subsumes all the others, in that it succeeds for a strictly larger range of parameters. Figure 1(b) shows more detailed results for sparse graphs, for which SLINK and the low-rank-plus-sparse approach fail completely, while our method significantly outperforms the spectral method, the only alternative that works in this regime.

Fig. 1: (a) Comparison of our method with Single-Linkage clustering (SLINK), spectral clustering, and the low-rank-plus-sparse (L+S) approach. The area above each curve is the set of edge-density values for which the corresponding method successfully recovers the underlying true clusters. (b) More detailed results for the area in the box in (a). The experiments are conducted on synthetic data with a fixed number of nodes and clusters of equal size.

VI Proof of Lemma 1

In this section we prove the monotone lemma. Let Y denote the unique optimum of (2)–(3) for the original input A, and let A' denote the modified input. Let Y' be an arbitrary alternative feasible solution with Y' ≠ Y. By the optimality of Y for the original program, we have

Next, by the definition of A', Y and Y', we have

and

where the last inequality uses the constraint that all entries of Y' lie between 0 and 1. Summing the left- and right-hand sides of the last three displayed equations establishes that

Since Y' is arbitrary, we conclude that Y is the unique optimum of the modified program.

VII Proof of Theorem 1

We prove our main theorem in this section. The proof consists of three main steps, which we elaborate below.

VII-A Step 1: Reduction to Homogeneous Edge Probabilities

We show that it suffices to assume that the in-cluster edge probability is uniformly p, and the across-cluster edge probability is uniformly q. In the heterogeneous model, suppose an edge is placed between nodes i and j with probability p_ij ≥ p if they are in the same cluster. This is equivalent to the following two-step model: first flip a coin with head probability p, and add the edge if it comes up heads; if it comes up tails, flip another coin and add the edge with probability (p_ij − p)/(1 − p), so that the overall edge probability is p_ij. By the monotone property in Lemma 1, if our convex program succeeds on the graph generated after the first step, then it also succeeds after the second step, because the second step only adds in-cluster edges. A similar argument applies to the across-cluster edges. Therefore, heterogeneous edge probabilities only increase the probability of success, and thus we only need to prove the homogeneous case.

VII-B Step 2: Optimality Condition

We need some additional notation. We denote the singular value decomposition of the true cluster matrix (notice it is symmetric and positive semidefinite) by . For any matrix , we define . For a set of matrix indices, let be the matrix obtained by setting the entries of outside to zero, and we use as a shorthand of . Define the sets and . The true cluster matrix is an optimal solution to the program (2)–(3) if

(8)

for all feasible obeying (3). Suppose there is a matrix that satisfies

(9)

The matrix is a subgradient of at , so . Therefore, (8) is implied by

(10)

The above inequality holds in particular for any feasible of the form with or with . This leads to the following element-wise inequalities:

(11)

It is easy to see that these inequalities are actually equivalent to (10), so together with (9) they form a sufficient condition for the optimality of .

Finding a “dual certificate” obeying the exact conditions (9) and (11) is difficult, and moreover doing so does not guarantee uniqueness of the optimum. Instead, we consider an alternative sufficient condition that only requires a certificate that approximately satisfies the exact conditions. This is done in Proposition 1 below (proved in Section VII-D), which significantly simplifies the construction of the certificate. Note that condition (b) in the proposition is a relaxation of the equality in (9), whereas condition (c) tightens (11). Setting the slack to zero and changing the equalities to inequalities in the proposition recovers the exact conditions.

Proposition 1.

The true cluster matrix Y* is the unique optimal solution to the program (2)–(3) if there exist a matrix and a number that satisfy the following conditions: (a) , (b) , and (c)

VII-C Step 3: Constructing the Dual Certificate

We build a certificate matrix that satisfies the conditions in Proposition 1 w.h.p. We use the standard notation for the all-one column vector, so that its outer product with itself is the all-one matrix, and we let the set of diagonal entries be denoted accordingly. For a quantity to be specified later, we define the certificate as a sum of component matrices, in which the identity matrix appears. We briefly explain the ideas behind the construction. Each of the component matrices is the sum of two terms. The first term is derived from the equalities in condition (c) in Proposition 1. The second term is constructed in such a way that each component is a zero-mean random matrix (due to the randomness in the graph), so it is likely to have small norms and satisfy conditions (a) and (b). One additional matrix accounts for the outlier nodes; in particular, it is a diagonal matrix whose non-zero entries correspond to the outlier nodes.

The following proposition (proved in Section VII-E) shows that the constructed certificate indeed satisfies all the desired conditions w.h.p., hence establishing Theorem 1.

Proposition 2.

Under the conditions in Theorem 1, the certificate constructed above satisfies conditions (a)–(c) in Proposition 1 w.h.p. with

VII-D Proof of Proposition 1 (Optimality Condition)

Let . Consider any feasible solution and let . The difference between the objective values of