I Introduction
This paper proposes a new algorithm for the following task: given an undirected, unweighted graph, assign the nodes to disjoint clusters so that the density of edges within clusters is higher than the density of edges across clusters. Clustering arises in applications such as community detection, user profiling, link prediction and collaborative filtering. In these applications, one is often given as input a set of similarity relationships (either “1” or “0”), and the goal is to identify groups of similar objects. For example, given the friendship relations on Facebook, one would like to detect tightly connected communities, which is useful for subsequent tasks like customized recommendation and advertisement.
Graphs in modern applications have several characteristics that complicate graph clustering:

Small density gap: the edge density across clusters differs from the density within clusters by only a small additive or multiplicative factor;

Sparsity: the graph is overall very sparse even within clusters;

Outliers: there may exist a large number of nodes that do not belong to any cluster and are loosely connected to the rest of the graph;

High dimensionality: the number of clusters may grow unbounded as a function of the number of nodes n, which means the sizes of the clusters can be vanishingly small compared to n;

Heterogeneity: the cluster sizes, node degrees and edge densities may be non-uniform; there may even exist edges that are not well-modeled by probability distributions, as well as hierarchical cluster structures.
Various large modern datasets and graphs have these characteristics [1, 2]; examples include the web graph and the social graphs of various social networks. As is well recognized, these characteristics make clustering more difficult. When the difference between in-cluster and across-cluster edge densities is small, the clustering structure is less significant and thus harder to detect. Sparsity further reduces the amount of information and makes the problem noisier. In the high-dimensional regime there are many small clusters, which are easily lost in the noise. Heterogeneous and non-random structures in the graph foil many algorithms that otherwise perform well. Finally, the existence of hierarchical structures and outliers renders many existing algorithms and theoretical results inapplicable, as they fix the number of clusters a priori and force each node to be assigned to a cluster. It is desirable to design an algorithm that can handle all these issues in a principled manner.
I-A Our Contributions
Our algorithmic contribution is a new method for unweighted graph clustering. It is motivated by the maximum-likelihood estimator for the classical Stochastic Blockmodel [3] (a.k.a. the Planted Partition Model [4]) for random clustered graphs. In particular, we show that this maximum-likelihood estimator can be written as a linear objective over combinatorial constraints; our algorithm is a convex relaxation of these constraints, yielding a convex program overall. While this is the motivation, the method performs well, both in theory and practice, in settings well beyond the standard stochastic blockmodel. Our main analytical result is a set of theoretical guarantees on its performance; we study it in a semirandom generalized stochastic blockmodel. This model generalizes not only the standard stochastic blockmodel and planted partition model, but also many other classical planted models, including the planted clique model [5, 6], the planted coloring model [7, 8] and their semirandom variants [9, 10, 11]. Our main result gives the condition (as a function of the in-/across-cluster edge densities p and q, the density gap p - q, the cluster sizes, and the numbers of inliers/outliers n1 and n2) under which our algorithm is guaranteed to recover the ground-truth clustering. When p > q, the condition reads
here all the parameters are allowed to scale with n, the total number of nodes. An analogous result holds for q > p.
While the planted and stochastic block models have a rich literature, this single result shows that our algorithm outperforms every existing method on the standard planted partition/clique/noisy-coloring models, and matches them (up to at most logarithmic factors) in all other cases, in the sense that our algorithm succeeds for a larger range of the parameters. In fact, there is evidence indicating that we are close to the boundary at which any polynomial-time algorithm can be expected to work. Moreover, the proof of our main theorem is relatively simple, relying only on standard concentration results. Our simulation study supports our theoretical findings: the proposed method is effective in clustering noisy graphs and outperforms existing methods.
The rest of the paper is organized as follows: Section I-B provides an overview of related work; Section II presents our algorithm; Section III describes the Semirandom Generalized Stochastic Blockmodel, a generalization of the standard stochastic blockmodel that allows the modeling of the issues mentioned above; Section IV presents the main results, namely a performance analysis of our algorithm under the semirandom generalized stochastic blockmodel and a detailed comparison with the existing literature on this problem; Section V provides simulation results; finally, the proofs of our theoretical results are given in Sections VI to IX.
I-B Related Work
The general field of clustering, or even graph clustering, is too vast for a detailed survey here; we focus on the most related threads, and within those, primarily on work that provides analytical guarantees for the resulting algorithms.
Stochastic block models: Also called “planted models” [3, 4], these are arguably the most natural random clustered graph models. In the simplest, or standard, setting, n nodes are partitioned into disjoint equal-sized subsets (called the true clusters), and edges are then generated independently at random, with the probability p of an edge between two nodes in the same cluster higher than the probability q of an edge between two nodes in different clusters. The algorithmic clustering task in this setting is to recover the true clusters given the graph. The parameters p and q and the size K of the smallest cluster typically govern whether an algorithm can perform this clustering or not. There is now a long line of analytical work on stochastic block models; we focus on methods that allow for exact recovery (i.e., every node is correctly classified), and summarize the conditions required by known methods in Table I. As can be seen, we improve over all existing methods by polynomial factors. In addition, and in contrast to several of these methods, we can handle outliers, heterogeneity, hierarchy in clustering, etc. A complementary line of work has investigated lower bounds in this setting, i.e., for what values/scalings of p, q and K it is not possible (either for any algorithm, or for any polynomial-time algorithm) to recover the underlying true clusters [12, 13, 14]. We discuss these two lines of work in more detail in the main results section.

Convex methods for matrix decomposition: Our method is related to the recent literature on the recovery of low-rank matrices using convex optimization, and in particular the recovery of such matrices from “sparse” perturbations (i.e., where a fraction of the elements of the low-rank matrix are possibly arbitrarily modified, while the others are untouched). Sparse and low-rank matrix decomposition using convex optimization was initiated by [24, 25]; follow-up works [26, 27] provide the current state-of-the-art guarantees for this problem, and [28] applies it directly to graph clustering.
The method in this paper is maximum likelihood, but it can also be viewed as a weighted version of sparse and low-rank matrix decomposition, with different elements of the sparse part penalized differently based on the given input graph. There is currently no literature analyzing weighted matrix decomposition; in that sense, while our weights have a natural motivation in our setting, our recovery results are likely to have broader implications, for example for robust versions of PCA where not all errors are created equal but instead have a corresponding prior. Moreover, our lower-bound result immediately implies a tight necessary condition for the standard matrix decomposition problem.
II Algorithm
We now describe our algorithm; as mentioned, it is a convex relaxation of Maximum Likelihood (ML) as applied to the standard stochastic blockmodel. So, in what follows, we first develop notation and the exact ML estimator, and then its relaxation.
ML for the standard stochastic blockmodel: Recall that in the standard stochastic blockmodel the n nodes are partitioned into disjoint clusters, and edges in the graph are chosen independently; the probability of an edge between a pair of nodes in the same cluster is p, and for a pair of nodes in different clusters it is q. Given the graph, the task is to find the underlying clusters that generated it. To write down the ML estimator, we represent any candidate partition by a corresponding cluster matrix Y, where Y_ij = 1 if and only if nodes i and j are in the same cluster, and Y_ij = 0 otherwise (we adopt the convention that Y_ii = 1 for any node i that belongs to a cluster). Any such Y thus has a block-diagonal structure, with each block consisting of all 1's.
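To make this representation concrete, here is a small sketch (in Python with NumPy; the helper name is ours, not from the paper) that builds a cluster matrix from a label vector and confirms that its rank equals the number of clusters:

```python
import numpy as np

def cluster_matrix(labels):
    """Build the cluster matrix Y of a partition: Y[i, j] = 1 iff nodes
    i and j belong to the same cluster (so Y[i, i] = 1 for clustered nodes)."""
    labels = np.asarray(labels)
    return (labels[:, None] == labels[None, :]).astype(float)

# 6 nodes in two clusters of size 3: Y is block-diagonal with all-one blocks.
Y = cluster_matrix([0, 0, 0, 1, 1, 1])
print(int(np.linalg.matrix_rank(Y)))  # 2, the number of clusters
```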
A vanilla ML estimator involves optimizing the likelihood subject to the combinatorial constraint that the candidate solution is a cluster matrix. Let A be the observed adjacency matrix of the graph (we assume A_ii = 1 for all i). The log-likelihood of Y given A can be written, via a rearrangement of terms, as

log P(A | Y) = c1 * sum_{(i,j): A_ij = 1} Y_ij - c2 * sum_{(i,j): A_ij = 0} Y_ij + C,   (1)

where c1 = log(p/q) > 0 and c2 = log((1-q)/(1-p)) > 0, and C collects the terms that are independent of Y. The ML estimator maximizes this expression subject to Y being a cluster matrix. While the objective is a linear function of Y, the optimization problem is combinatorial due to the requirement that Y be a cluster matrix (i.e., block-diagonal with each block being all ones), and it is intractable in general.
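As a sanity check of this rearrangement, the Y-dependent part of the log-likelihood can be evaluated directly. The following sketch (our own illustration, using the standard SBM coefficients log(p/q) and log((1-q)/(1-p))) confirms that on a noiseless graph the true partition scores higher than a mismatched one:

```python
import numpy as np

def ml_objective(A, Y, p, q):
    """Y-dependent part of the rearranged log-likelihood:
    log(p/q) * (#edges inside candidate clusters)
      - log((1-q)/(1-p)) * (#non-edges inside candidate clusters)."""
    c1 = np.log(p / q)                 # reward for edges explained as in-cluster
    c2 = np.log((1 - q) / (1 - p))     # penalty for missing in-cluster edges
    off = ~np.eye(len(A), dtype=bool)  # exclude the diagonal
    return c1 * np.sum(A[off] * Y[off]) - c2 * np.sum((1 - A[off]) * Y[off])

A = np.kron(np.eye(2), np.ones((2, 2)))      # two perfect 2-cliques
Y_true = A.copy()                            # the generating partition
Y_bad = np.kron(np.ones((2, 2)), np.eye(2))  # a mismatched partition
print(ml_objective(A, Y_true, 0.8, 0.2) > ml_objective(A, Y_bad, 0.8, 0.2))  # True
```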
Our algorithm: We obtain a convex and tractable algorithm by replacing the constraint “Y is a cluster matrix” with (i) the constraints 0 <= Y_ij <= 1 for all elements (i, j), and (ii) a nuclear norm regularizer in the objective (the nuclear norm of a matrix is the sum of its singular values). The latter encourages Y to be low-rank, and is based on the well-established insight that a cluster matrix has low rank; in particular, its rank equals the number of clusters. (We discuss other related relaxations after we present our algorithm.) Also notice that the likelihood expression (1) is linear in Y and only the ratio of the two coefficients c1 and c2 matters. We thus introduce a parameter t that allows us to choose any ratio. This has the advantage that instead of knowing both p and q, we only need to choose one number t (which should lie between q and p; we remark on how to choose t later). This leads to the following convex formulation:
maximize_Y   c_A * sum_{(i,j): A_ij = 1} Y_ij  -  c_{A^c} * sum_{(i,j): A_ij = 0} Y_ij  -  lambda * ||Y||_*    (2)

subject to   0 <= Y_ij <= 1 for all i, j,    (3)

where the weights c_A and c_{A^c}, which depend on t, are given in (4).
Here the factor lambda balances the contributions of the nuclear norm and the likelihood, and the specific forms of c_A and c_{A^c} are derived from our analysis. The optimization problem (2)–(3) is convex and can be cast as a semidefinite program (SDP) [24, 29]. More importantly, it can be solved using efficient first-order methods for large graphs (see Section V-A).
Our algorithm is given as Algorithm 1. Depending on the given graph and the choice of t, the optimal solution may or may not be a cluster matrix. Checking whether a given solution is a cluster matrix is easy, e.g., via an SVD, which will also reveal the cluster memberships if it is one. If it is not, any one of several rounding/aggregation ideas (e.g., the one in [30]) can be used empirically; we do not delve into this approach in this paper, and simply output failure. In Section IV we provide sufficient conditions under which the optimal solution is exactly the true cluster matrix, with no rounding required.
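The SVD is one way to perform this check; an equally simple sketch (ours, not the paper's procedure) snaps the numerical solution to binary values and reads memberships off matching rows, declaring failure otherwise:

```python
import numpy as np

def read_clusters(Y):
    """If Y is (numerically close to) a cluster matrix, return a label vector:
    clustered nodes sharing a cluster get the same label, and outliers
    (diagonal entry 0) get label -1. Return None otherwise."""
    Yb = np.where(Y > 0.5, 1.0, 0.0)     # snap near-binary entries
    if np.max(np.abs(Y - Yb)) > 0.1:     # far from binary: not a cluster matrix
        return None
    labels = -np.ones(len(Y), dtype=int)
    seen = {}
    for i in range(len(Y)):
        if Yb[i, i] == 0:                # convention: outliers have Y[i, i] = 0
            continue
        labels[i] = seen.setdefault(tuple(Yb[i]), len(seen))
    return labels

Y = np.zeros((5, 5))
Y[:2, :2] = 1.0                          # cluster {0, 1}
Y[2:4, 2:4] = 1.0                        # cluster {2, 3}; node 4 is an outlier
print(read_clusters(Y + 0.01))           # labels 0, 0, 1, 1 and -1 (outlier)
```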
II-A Remarks about the Algorithm
Note that while we derive our algorithm from the standard stochastic blockmodel, our analytical results hold in a much more general setting. In practice, one could execute the algorithm (with an appropriate choice of t, and hence of c_A and c_{A^c}) on any given graph.
Tighter relaxations: The formulation (2)–(3) is not the only way to relax the non-convex ML estimator. Instead of the nuclear norm regularizer, a hard constraint on the nuclear norm may be used. One may further add the positive semidefinite constraint on Y and linear constraints satisfied by any cluster matrix (some of which hold only when there is no outlier). It is not hard to check that these modifications lead to convex relaxations with smaller feasible sets, so any performance guarantee for our formulation (2)–(3) also applies to these alternative formulations. We choose not to use these potentially tighter relaxations based on the following theoretical and practical considerations: (a) these formulations do not work well when the numbers of outliers and clusters are unknown; (b) we do not obtain better theoretical guarantees with them; in fact, the work [30] considers these tighter constraints, but its exact recovery guarantees are improved upon by ours. Moreover, as we argue in the next section, our guarantees are likely to be order-wise optimal, so alternative convex formulations are unlikely to provide significant improvements in a scaling sense; (c) our simpler formulation facilitates efficient solution for large graphs via first-order methods; we describe one such method in Section V-A.
Choice of t: Our algorithm requires an extraneous input t. For the standard planted clique problem [6, 5] (with cliques planted in an Erdős–Rényi random graph), t can be set to a fixed constant (see Section IV-C2). For the standard stochastic blockmodel (with nodes partitioned into equal-size clusters and edge probabilities uniformly p inside clusters and q across clusters), the value of t can be easily computed from data (see Section IV-D). In these cases, our algorithm has no tuning parameters whatsoever and does not require knowledge of the number or sizes of the clusters. In the general setting, t should be chosen to lie between q and p, which now represent the upper bound on the across-cluster edge density and the lower bound on the in-cluster edge density, respectively. As such, t can be interpreted as the resolution of the clustering algorithm. To see this, suppose the clusters have a hierarchical structure, where each big cluster is partitioned into smaller sub-clusters with higher edge densities inside. In this case it is a priori unclear which level of clusters, the larger ones or the smaller ones, should be recovered. This ambiguity is resolved by specifying t: our algorithm recovers those clusters whose in-cluster edge density is higher than t and whose across-cluster density is lower than t. With a larger t, the algorithm operates at a higher resolution and detects small clusters with high density. By varying t, our algorithm can be turned into a method for multi-resolution clustering [1] that explores all levels of the cluster hierarchy; we leave this to future work. Importantly, the above example shows that it is generally impossible to uniquely determine the value of t from data alone.
III The Generalized Stochastic Blockmodel
While our algorithm above is derived as a relaxation of ML for the standard stochastic blockmodel, we establish performance guarantees in a much more general setting, defined by the six parameters n, K, p, q, n1 and n2; it is described below.
Definition 1 (Generalized Stochastic Blockmodel (GSBM)).
If p > q (p < q, resp.), consider a random graph generated as follows. The n nodes are divided into two sets V1 and V2, of sizes n1 and n2 respectively. The nodes in V1 are further partitioned into disjoint sets, which we refer to as the “true” clusters. Let K be the minimum size of a true cluster. For every pair of nodes i, j that belong to the same true cluster, the edge (i, j) is present in the graph with probability at least (at most, resp.) p, while for every pair of nodes in different clusters the edge is present with probability at most (at least, resp.) q. The n2 nodes in V2 are not in any cluster (we call them outliers); for each outlier u in V2 and any other node v, the edge (u, v) is present with probability at most (at least, resp.) q.
Definition 2 (Semirandom GSBM).
On a graph generated from GSBM with p > q (p < q, resp.), an adversary is allowed to arbitrarily (a) add (remove, resp.) edges between nodes in the same true cluster, and (b) remove (add, resp.) edges between pairs of nodes in different clusters, or pairs in which at least one node is an outlier in V2.
The objective is to find the underlying true clusters, given the graph generated from the semirandom GSBM.
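For intuition (and for experiments), a homogeneous instance of GSBM with p > q is easy to sample. The sketch below is our own illustration: it uses edge probability exactly p in-cluster and exactly q elsewhere, which is one member of the GSBM family, and returns the adjacency matrix along with the ground-truth labels:

```python
import numpy as np

def sample_gsbm(sizes, n_out, p, q, seed=0):
    """Sample a homogeneous GSBM graph with p > q: each in-cluster pair is
    joined w.p. p, every other pair (including pairs touching an outlier)
    w.p. q. Returns the adjacency matrix A (with A[i, i] = 1) and labels
    (outliers get label -1)."""
    rng = np.random.default_rng(seed)
    labels = np.concatenate([np.repeat(np.arange(len(sizes)), sizes),
                             -np.ones(n_out, dtype=int)])
    same = (labels[:, None] == labels[None, :]) & (labels[:, None] >= 0)
    prob = np.where(same, p, q)
    n = len(labels)
    upper = np.triu(rng.random((n, n)) < prob, k=1).astype(int)
    A = upper + upper.T                  # symmetric adjacency matrix
    np.fill_diagonal(A, 1)               # convention used by the algorithm
    return A, labels

A, labels = sample_gsbm([3, 3], 2, p=1.0, q=0.0)  # two 3-cliques plus 2 outliers
```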
The standard stochastic blockmodel/planted partition model is the special case of GSBM with n2 = 0, all cluster sizes equal to K, and all in-cluster (across-cluster, resp.) probabilities equal to p (q, resp.). GSBM generalizes the standard models by allowing heterogeneity in the graph:

p and q are lower and upper bounds, respectively, instead of exact edge probabilities;

K is only a lower bound on the cluster sizes, so clusters can have different sizes;

outliers (nodes not in any cluster) are allowed.
GSBM thus removes many restrictions of standard planted models and better models practical graphs.
The semirandom GSBM allows for further modeling power. It blends the worst-case models, which are often overly pessimistic (for example, the minimum graph bisection problem is NP-hard), and purely random graphs, which are extremely unstructured and have very special properties usually not possessed by real-world graphs [31]. This semirandom framework has been used and studied extensively in the computer science literature as a better model for real-world networks [9, 10, 11]. At first glance, the adversary seems to make the problem easier by adding in-cluster edges and removing across-cluster edges when p > q. This is not necessarily the case. The adversary can significantly change some statistical properties of the random graph (e.g., alter the spectral structure and node degrees, and create local optima by adding dense spots [10]), and thereby foil algorithms that over-exploit such properties. For example, some spectral algorithms that work well on random models provably fail in the semirandom setting [8]. An algorithm that is robust against such an adversary is more likely to work well on real-world graphs. As shown later, our algorithm possesses this desired property.
III-A Special Cases
GSBM recovers as special cases many classical and widely studied models for clustered graphs, obtained by particular choices of the parameters n, K, p, q, n1 and n2. We classify these models into two categories based on the relation between p and q.
In the next two sections, we describe our algorithm and provide performance guarantees under the semirandom GSBM. This implies guarantees for all the special cases above. We provide a detailed comparison with literature after presenting our results.
IV Main Results: Performance Guarantees
In this section we provide analytical performance guarantees for our algorithm under the semirandom GSBM. We provide a unified theorem, then discuss its consequences for various special cases and compare with the literature. We also discuss how to estimate the parameter t in the special case of the standard stochastic blockmodel. We first consider the case with p > q; the case p < q is a direct consequence and is discussed in Section IV-C3. All proofs are postponed to Sections VI to IX.
IV-A A Monotone Lemma
Our optimization-based algorithm has a nice monotonicity property: adding/removing edges “aligned with” the optimal solution (as is done by the adversary) cannot result in a different optimum. This is summarized in the following lemma.
Lemma 1.
The lemma shows that our algorithm is inherently robust under the semirandom model. In particular, the algorithm succeeds in recovering the true clusters on the semirandom GSBM as long as it succeeds on GSBM with the same parameters. In the sequel, we therefore focus solely on GSBM, with the understanding that any guarantee for it immediately implies a guarantee for the semirandom variant.
IvB Main Theorem
Let Y* be the matrix corresponding to the true clusters in GSBM, i.e., Y*_ij = 1 if and only if nodes i and j are both in V1 and in the same true cluster, and Y*_ij = 0 otherwise. The theorem below establishes conditions under which our algorithm, specifically the convex program (2)–(3), yields this Y* as the unique optimum (without any further need for rounding, etc.) with high probability (w.h.p.). Throughout the paper, “with high probability” means with probability at least 1 - n^{-c} for some universal absolute constant c > 0.
Theorem 1.
Our theorem quantifies the tradeoff between the four parameters governing the hardness of GSBM, namely the minimum in-cluster edge density p, the maximum across-cluster edge density q, the minimum cluster size K and the number of outliers n2, required for our algorithm to succeed, i.e., to recover the underlying true clustering without any error. Note that we can handle any values of p and q as long as they satisfy the condition in the theorem; in particular, they are allowed to scale with n. Interestingly, the theorem has no explicit dependence on the number of clusters beyond the constraint that the clusters fit within V1 (each has size at least K, so there are at most n1/K of them).
We now discuss the tightness of Theorem 1 in terms of these parameters. When n2 = 0, we have a near-matching converse result.
Theorem 2.
Suppose all clusters have equal size K, and the in-cluster (across-cluster, resp.) edge probabilities are uniformly p (q, resp.), with p > q. Under GSBM with n2 = 0 and n sufficiently large, for any algorithm to correctly recover the clusters with constant probability, we must have
where c is an absolute constant.
This theorem gives a necessary condition for any algorithm to succeed, regardless of its computational complexity. It shows that Theorem 1 is optimal up to logarithmic factors in this regime.
Remark.
In the case with n2 = 0, identification of the clusters is equivalent to recovering the low-rank matrix Y* (whose rank equals the number of clusters, and whose incoherence parameter [25, 24] is constant when the clusters have equal size) and the sparse matrix S := A - Y* from their sum A. Therefore, Theorem 2 implies a necessary condition for the standard low-rank-plus-sparse decomposition problem [24, 25]. This shows that the result in [26] is tight up to logarithmic factors in the corresponding regime.
For smaller values of the density gap, notice that Theorem 1 requires K to be at least of order sqrt(n), since the left-hand side of (6) is bounded. This lower bound on K is reached when p and q are both constants independent of n and K. There are reasons to believe that this requirement is unlikely to be improvable using polynomial-time algorithms. Indeed, the special case with p = 1 and q = 1/2 corresponds to the classical planted clique problem [32]; finding a planted clique of size o(sqrt(n)) is widely believed to be computationally hard even on average, and this has been used as a hardness assumption for cryptographic applications [33, 34].
For other values of p, q and K, no general and rigorous converse result exists. However, there is evidence suggesting that no other polynomial-time algorithm is likely to have better guarantees than our result (6). The authors of [13] show, using non-rigorous but deep arguments from statistical physics, that recovering the clustering is impossible in polynomial time when the density gap falls below a certain threshold. Moreover, the work in [14] shows that a large class of spectral algorithms fail under the same condition. In view of these results, it is possible that our algorithm is order-wise optimal among all polynomial-time algorithms for all values of p, q and K.
Several further remarks regarding Theorem 1 are in order.

A nice feature of our result is that the density gap p - q only needs to be large relative to the noise level, not in absolute terms; several other existing results (see Table I) require a lower bound (as a function of n and K) on the gap itself. In the favorable regimes, we allow p and q themselves to be vanishingly small.

The number of clusters is allowed to grow rapidly with n; this is called the high-dimensional setting [21]. In particular, our algorithm can recover on the order of sqrt(n) equal-size clusters. Any algorithm with a better scaling would recover planted cliques of size o(sqrt(n)), an unlikely task in polynomial time in light of the hardness of the planted clique problem discussed above.

The number of outliers n2 can also be large: almost all the nodes can be outliers, with the clustered nodes forming a vanishing fraction of the graph, and this remains true even when there are multiple clusters that are not cliques (i.e., p < 1).

Not all existing methods can handle non-uniform edge probabilities and node degrees, which often require special treatment (see [12]). This issue is handled seamlessly by our method, by the very definition of GSBM.
In the following subsection, we discuss various planted problems to which Theorem 1 applies and compare with existing work. Our results match the best existing results in all cases (up to logarithmic factors), and in many important settings lead to order-wise stronger guarantees.
IV-C Consequences and Comparison with Literature
IV-C1 Standard Stochastic Blockmodel (a.k.a. Planted Partition Model)
This model assumes that all clusters have the same size K with no outliers (n2 = 0), and that the in-cluster and across-cluster edge probabilities are uniformly p and q, respectively, with p > q. We compare our result to past approaches and theoretical results in Table I. Our result improves on all existing results by polynomial factors in the achievable scalings of the density gap and the cluster size. This means that we can handle much noisier and sparser graphs, especially when the number of clusters is growing. A recent paper [35], which appeared after the publication of the conference version [36] of this manuscript, proposes a tensor approach to graph clustering. Our guarantee is a few logarithmic factors better than their results applied to the standard stochastic blockmodel.
IV-C2 Planted Clique Problem
Here the task is to find a set of disjoint cliques, each of size at least K, that have been planted in an Erdős–Rényi random graph. Setting p = 1 in Theorem 1, we obtain the following guarantee for the planted clique problem.
Corollary 1.
IV-C3 The Heterophily Case (p < q)
Given a graph generated from the semirandom GSBM with intra-/inter-cluster densities p < q, we can apply our algorithm to the complement graph, whose adjacency matrix is J - A, where J is the all-one matrix. Note that this complement can be considered as generated from GSBM with intra-/inter-cluster densities 1 - p and 1 - q, where 1 - p > 1 - q. With this reduction, Lemma 1 and Theorem 1 immediately yield the following guarantee.
Corollary 2.
This corollary immediately yields guarantees for the planted coloring problem [7] and the planted cut [9] (a.k.a. planted noisy coloring [13]) problem. We are not aware of any existing work that explicitly considers the mirrored GSBM in its general form (non-uniform edge probabilities and cluster sizes, outliers, and potentially non-random edges). However, since any guarantee for GSBM with p > q yields, via the above reduction, a guarantee for GSBM with p < q, Table I provides a comparison with existing work when p < q and the edge probabilities and cluster sizes are uniform. Again, our guarantee outperforms all existing ones.
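The reduction itself is a one-liner; the following sketch (ours) forms the complement J - A and shows that complementing twice recovers the original graph:

```python
import numpy as np

def heterophily_reduction(A):
    """Map a p < q instance to a (1 - p) > (1 - q) instance by taking the
    complement graph J - A, where J is the all-one matrix. (Note the diagonal
    convention A[i, i] = 1 flips to 0; restore it if the solver expects 1.)"""
    return np.ones_like(A) - A

A = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
B = heterophily_reduction(A)
print(B.tolist())  # [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
```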
IV-C4 Planted Coloring Problems
IV-D Estimating t in Special Cases
We have argued that specifying t in a completely data-driven way is ill-posed for the general GSBM, e.g., when the clusters are hierarchical. However, in special cases this can be done reliably with strong guarantees. Consider the standard stochastic blockmodel, where all r clusters have the same size K = n/r, the edge probabilities are uniform (equal to p within clusters and q across clusters, with p > q), and there are no outliers (n2 = 0) or non-random edges. Observe that the expected adjacency matrix is (up to the diagonal convention A_ii = 1) a block matrix with p's inside the diagonal blocks and q's elsewhere, equal to the Kronecker product of the K x K all-one matrix and the r x r circulant matrix with p on the diagonal and q elsewhere. The all-one matrix has the single nonzero eigenvalue K, and the circulant matrix has eigenvalues p + (r-1)q and p - q with multiplicities 1 and r - 1, respectively. The eigenvalues of the Kronecker product are the pairwise products of the eigenvalues of these two matrices. It follows that the largest eigenvalue of the expected adjacency matrix is K(p + (r-1)q) with multiplicity 1, the second distinct eigenvalue is K(p - q) with multiplicity r - 1, and the remaining eigenvalue is 0 with multiplicity n - r [18]. This motivates using the eigenvalues of the observed matrix A to estimate p, q and K, and hence t; see Algorithm 2.
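The claimed spectrum is easy to verify numerically. This sketch (our own check, which ignores the diagonal convention; in practice that convention only slightly perturbs the spectrum) builds the expected block matrix for r = 3 clusters of size K = 4 and confirms the three predicted eigenvalues:

```python
import numpy as np

r, K, p, q = 3, 4, 0.8, 0.2
C = q * np.ones((r, r)) + (p - q) * np.eye(r)  # p on the diagonal, q elsewhere
EA = np.kron(C, np.ones((K, K)))               # expected adjacency matrix
eig = np.sort(np.linalg.eigvalsh(EA))[::-1]    # eigenvalues, descending

print(np.isclose(eig[0], K * (p + (r - 1) * q)))  # True: top eigenvalue, mult. 1
print(np.isclose(eig[1], K * (p - q)))            # True: second value, mult. r-1
print(np.isclose(eig[r], 0.0))                    # True: the rest are zero
```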
1) Compute and sort the eigenvalues of A, denoted lambda_1 >= lambda_2 >= ... >= lambda_n.

2) Estimate the number of clusters r by the number of leading eigenvalues that are well separated from the remaining ones, and set the estimated cluster size to K = n/r.

3) Estimate q = (lambda_1 - lambda_2)/n and p = lambda_2/K + q; these choices are consistent with the identities lambda_1 = K(p + (r-1)q) and lambda_2 = K(p - q) together with rK = n.

4) Set t = (p + q)/2, using the estimated p and q.
The following theorem guarantees that the estimation errors are small.
Theorem 3.
In particular, the estimated t satisfies the condition (5) in Theorem 1. The above theorem also ensures that Algorithm 2 is a consistent estimator of the parameters p, q and K when condition (6) is satisfied, a result of independent interest. Combining Theorems 1 and 3, we obtain a complete algorithm that is guaranteed to find the clusters under the standard stochastic blockmodel obeying condition (6), without knowledge of any parameters of the underlying generative model.
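One consistent instantiation of these spectral estimates (our reconstruction from the eigenvalue identities above, taking the number of clusters r as known for simplicity; Algorithm 2 also estimates r) can be sketched as:

```python
import numpy as np

def estimate_params(A, r):
    """Estimate (p, q, t) from the two largest eigenvalues of A, using
    lam1 ~ K(p + (r-1)q) and lam2 ~ K(p - q) with K = n/r, so that
    lam1 - lam2 ~ n*q."""
    n = A.shape[0]
    lam = np.sort(np.linalg.eigvalsh(A))[::-1]
    K = n / r
    q_hat = (lam[0] - lam[1]) / n
    p_hat = lam[1] / K + q_hat
    return p_hat, q_hat, (p_hat + q_hat) / 2

# Exact on the expected adjacency matrix of a 3-cluster blockmodel:
r, K, p, q = 3, 4, 0.8, 0.2
EA = np.kron(q * np.ones((r, r)) + (p - q) * np.eye(r), np.ones((K, K)))
print(estimate_params(EA, r))  # approximately (0.8, 0.2, 0.5)
```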
V Empirical Results
V-A Implementation Issues
The convex program (2)–(3) can be solved using a general-purpose SDP solver, but this approach does not scale beyond graphs with a few hundred nodes. To enable fast and efficient solution, we use a family of first-order algorithms called Augmented Lagrange Multiplier (ALM) methods. Note that the program (2)–(3) can be rewritten as

minimize_Y   lambda * ||Y||_* + || C ∘ (Y - A) ||_1    (7)

subject to   0 <= Y_ij <= 1 for all i, j,

where the matrix C satisfies C_ij = c_A if A_ij = 1 and C_ij = c_{A^c} otherwise, and ∘ denotes the elementwise matrix product. This problem can be recognized as a generalization of the standard convex formulation of the low-rank-plus-sparse matrix decomposition problem [25, 24], whose numerical solution has been well studied. We adapt the ALM method of [37] to this problem; the result is given as Algorithm 3.
Here S_W(.) is the elementwise weighted soft-thresholding operator, defined by (S_W(X))_ij = sign(X_ij) * max(|X_ij| - W_ij, 0). In other words, it shrinks each entry of X towards zero by the corresponding entry of W. The unweighted version, with all thresholds equal to a common scalar, is also used. The stopping criteria and parameters of the algorithm are chosen as in [37].
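The two proximal steps at the heart of the ALM iteration can be sketched as follows (our own illustration: weighted shrinkage handles the || C ∘ (Y - A) ||_1 term, and singular value thresholding handles the nuclear norm):

```python
import numpy as np

def soft_threshold(X, W):
    """Elementwise weighted soft-thresholding: shrink each entry of X
    towards zero by the matching entry of W (W may also be a scalar)."""
    return np.sign(X) * np.maximum(np.abs(X) - W, 0.0)

def sv_threshold(X, tau):
    """Singular value thresholding, the proximal operator of the nuclear
    norm: apply (unweighted) soft-thresholding to the singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

print(soft_threshold(np.array([3.0, -2.0, 0.5]), 1.0).tolist())  # [2.0, -1.0, 0.0]
```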
V-B Simulations
We perform experiments on synthetic data and compare with other methods. We generate a graph using the stochastic blockmodel with n nodes, r clusters of equal size K = n/r, and in-cluster/across-cluster edge probabilities p and q. We apply our method to the graph, picking t using Algorithm 2 and solving the optimization problem using Algorithm 3. Due to limited numerical accuracy, the output of our algorithm may not be strictly integer, so we apply the following simple rounding: compute the mean of the entries of the solution, and round each entry to 1 if it is greater than the mean, and to 0 otherwise. We measure the error by the number of entries on which the rounded solution and the true cluster matrix Y* disagree, i.e., the number of misclassified pairs. We say a method succeeds if it misclassifies only a small fraction of the pairs.
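The rounding and error measure just described are tiny (a sketch, ours):

```python
import numpy as np

def round_by_mean(Y_hat):
    """Round a near-binary solution: entries above the mean go to 1."""
    return (Y_hat > Y_hat.mean()).astype(int)

def misclassified_pairs(Y_round, Y_true):
    """Number of entries (node pairs) on which the two matrices disagree."""
    return int(np.sum(Y_round != Y_true))

Y_hat = np.array([[0.97, 0.93, 0.04],
                  [0.93, 1.02, 0.08],
                  [0.04, 0.08, 0.99]])
Y_true = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 1]])
print(misclassified_pairs(round_by_mean(Y_hat), Y_true))  # 0
```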
For comparison, we consider three alternative methods: (1) Single-Linkage clustering (SLINK) [38], a hierarchical clustering method that merges the two most similar clusters in each iteration; we use the number of differing neighbors of nodes i and j as their distance, and terminate when SLINK arrives at a clustering with r clusters. (2) A spectral clustering method [39], where we run SLINK on the top r singular vectors of A. (3) The low-rank-plus-sparse approach [28, 22], followed by the rounding scheme described in the last paragraph. Note that the first two methods assume knowledge of the number of clusters r, which is not available to our method. For each q, we find the smallest p for which a method succeeds, averaged over multiple trials. The results are shown in Figure 1(a), where the area above each curve corresponds to the range of feasible (p, q) pairs for that method. It can be seen that our method subsumes all the others, in that we succeed for a strictly larger range of the parameters. Figure 1(b) shows more detailed results for sparse graphs, for which SLINK and the low-rank-plus-sparse approach fail completely, while our method significantly outperforms the spectral method, the only alternative that works in this regime.
Figure 1: (a) Feasible parameter ranges of the compared methods. (b) Detailed results for sparse graphs.
VI Proof of Lemma 1
In this section we prove the monotonicity lemma. Let Y* denote the optimum of the original program, and let Y be an arbitrary alternative feasible solution with Y distinct from Y*. By the optimality of Y* for the original program, the original objective value at Y* is at least that at Y. Next, since the modifications only add edges in positions where Y*_ij = 1 and remove edges in positions where Y*_ij = 0, each modification increases the objective at Y* by at least as much as it increases the objective at Y, where we use 0 <= Y_ij <= 1 for all (i, j) in the last inequality. Summing the left- and right-hand sides of these inequalities establishes that the modified objective at Y* exceeds that at Y. Since Y is arbitrary, we conclude that Y* is the unique optimum of the modified program.
VII Proof of Theorem 1
We prove our main theorem in this section. The proof consists of three main steps, which we elaborate below.
VII-A Step 1: Reduction to Homogeneous Edge Probabilities
We show that it suffices to consider the case where the in-cluster edge probability is uniformly p and the across-cluster edge probability is uniformly q. In the heterogeneous model, suppose an edge is placed between nodes i and j in the same cluster with probability p_ij >= p. This is equivalent to the following two-step model: first flip a coin with head probability p, and add the edge if it comes up heads; if it comes up tails, flip another coin and add the edge with probability (p_ij - p)/(1 - p). By the monotonicity property in Lemma 1, if our convex program succeeds on the graph generated in the first step, then it also succeeds after the second step, because the second step only adds in-cluster edges. A similar argument applies to the across-cluster edges. Therefore, heterogeneous edge probabilities only increase the probability of success, and it suffices to prove the homogeneous case.
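The equivalence of the two-step model is immediate algebraically; as a quick numeric check (ours):

```python
# Overall edge probability of the two-step model: heads (prob p) adds the
# edge outright; tails (prob 1 - p) adds it with prob (p_ij - p) / (1 - p).
p, p_ij = 0.3, 0.7
overall = p + (1 - p) * (p_ij - p) / (1 - p)
print(abs(overall - p_ij) < 1e-12)  # True: the edge appears w.p. exactly p_ij
```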
VII-B Step 2: Optimality Condition
We need some additional notation. We denote the singular value decomposition of the true cluster matrix Y* (notice that Y* is symmetric and positive semidefinite) by U Sigma U^T. For a matrix M and a set R of matrix indices, M_R denotes the matrix obtained by setting the entries of M outside R to zero. Define the set R = {(i, j) : Y*_ij = 1} and its complement R^c. The true cluster matrix Y* is an optimal solution to the program (2)–(3) if

(8)
for all feasible Y obeying (3). Suppose there is a matrix W that satisfies
(9) 
The matrix U U^T + W is then a subgradient of the nuclear norm at Y*. Therefore, (8) is implied by
(10) 
The above inequality holds in particular for any feasible Y that differs from Y* in a single entry, i.e., Y obtained from Y* by increasing an entry with Y*_ij = 0 or decreasing an entry with Y*_ij = 1. This leads to the following elementwise inequalities:
(11)  
It is easy to see that these inequalities are in fact equivalent to (10), so together with (9) they form a sufficient condition for the optimality of Y*.
Finding a “dual certificate” W obeying the exact conditions (9) and (11) is difficult, and moreover does not guarantee uniqueness of the optimum. Instead, we consider an alternative sufficient condition that only requires a W approximately satisfying the exact conditions. This is done in Proposition 1 below (proved in Section VII-D), which significantly simplifies the construction of W. Note that condition (b) in the proposition is a relaxation of the equality in (9), whereas condition (c) tightens (11). Setting the slack in the proposition to zero and changing equalities to inequalities recovers the exact conditions.
VII-C Step 3: Constructing the Dual Certificate
We build a W that satisfies the conditions in Proposition 1 w.h.p. We use 1 to denote the all-one column vector, so 1 1^T is the all-one matrix; I denotes the identity matrix, and D the set of diagonal entries. For a slack parameter to be specified later, we define W as a sum of four matrices, W = W_1 + W_2 + W_3 + W_4, given below. We briefly explain the ideas behind the construction. Each of the matrices W_1, W_2 and W_3 is the sum of two terms. The first term is derived from the equalities in condition (c) in Proposition 1. The second term is constructed in such a way that each W_k is a zero-mean random matrix (due to the randomness in A), so it is likely to have small norm and to satisfy conditions (a) and (b). The matrix W_4 accounts for the outlier nodes; in particular, it is a diagonal matrix whose (i, i) entry is nonzero if and only if node i is an outlier.

VII-D Proof of Proposition 1 (Optimality Condition)
Consider any feasible solution Y and let Delta = Y - Y*. The difference between the objective values of