Achieving Exact Cluster Recovery Threshold via Semidefinite Programming: Extensions

02/26/2015 ∙ by Bruce Hajek, et al. ∙ University of Illinois at Urbana-Champaign

Resolving a conjecture of Abbe, Bandeira and Hall, the authors have recently shown that the semidefinite programming (SDP) relaxation of the maximum likelihood estimator achieves the sharp threshold for exactly recovering the community structure under the binary stochastic block model of two equal-sized clusters. The same was shown for the case of a single cluster and outliers. Extending the proof techniques, in this paper it is shown that SDP relaxations also achieve the sharp recovery threshold in the following cases: (1) Binary stochastic block model with two clusters of sizes proportional to network size but not necessarily equal; (2) Stochastic block model with a fixed number of equal-sized clusters; (3) Binary censored block model with the background graph being Erdős-Rényi. Furthermore, a sufficient condition is given for an SDP procedure to achieve exact recovery for the general case of a fixed number of clusters plus outliers. These results demonstrate the versatility of SDP relaxation as a simple, general purpose, computationally feasible methodology for community detection.


1 Introduction

The stochastic block model (SBM) [27], also known as the planted partition model [15], is a popular statistical model for studying the community detection and graph partitioning problem (see, e.g., [30, 16, 35, 33, 29, 13, 14, 3] and the references therein). In its simple form, it assumes that out of a total of n vertices, K_1 + ⋯ + K_r of them are partitioned into r clusters with sizes K_1, …, K_r, and the remaining vertices do not belong to any cluster (called outlier vertices); a random graph G is then generated based on the cluster structure, where each pair of vertices is connected independently with probability p if they are in the same cluster or with probability q otherwise. In this paper, we focus on the problem of exactly recovering the clusters (up to a permutation of cluster indices) based on the graph G.
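As a concrete illustration of this generative model, the following sketch samples such a graph using only the Python standard library; the consecutive-cluster labeling and the function name are illustrative choices of this snippet, not part of the model.

```python
import random

def sample_sbm(cluster_sizes, n, p, q, seed=0):
    """Sample an adjacency matrix from the planted partition model:
    vertices 0..n-1; the first sum(cluster_sizes) vertices are split into
    consecutive clusters and the rest are outliers.  Pairs within the same
    cluster are connected with probability p, all other pairs with q."""
    rng = random.Random(seed)
    label = [None] * n          # label[i] = cluster index, or None for outliers
    v = 0
    for k, size in enumerate(cluster_sizes):
        for _ in range(size):
            label[v] = k
            v += 1
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            same = label[i] is not None and label[i] == label[j]
            if rng.random() < (p if same else q):
                A[i][j] = A[j][i] = 1
    return A, label
```

Setting p = 1 and q = 0 recovers the noiseless case, where the graph is a disjoint union of cliques plus isolated outliers.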

In the setting of two equal-sized clusters or a single cluster plus outlier vertices, it has recently been shown in [24] that the semidefinite programming (SDP) relaxation of the maximum likelihood (ML) estimator achieves the optimal recovery threshold with high probability, in the asymptotic regime of p = a log n / n and q = b log n / n for fixed constants a, b and cluster sizes growing linearly in n as n → ∞. The result for two equal-sized clusters was originally conjectured in [2] and another resolution was recently given in [10] independently.

In this paper, we extend the optimality of SDP to the following three cases, while still assuming p = a log n / n and q = b log n / n with fixed constants a, b:

  • Stochastic block model with two asymmetric clusters: the first cluster consists of K = ⌈ρn⌉ vertices and the second cluster consists of n − K vertices, for some ρ ∈ (0, 1/2]. The value of ρ may be known or unknown to the recovery procedure.

  • Stochastic block model with r clusters of equal size K = n/r: r is a fixed integer and n is a multiple of r.

  • Censored block model with two clusters: given an Erdős-Rényi random graph G ~ G(n, p), each edge (i, j) has a label L_ij ∈ {±1} independently drawn according to the distribution: L_ij = σ*_i σ*_j with probability 1 − ξ and L_ij = −σ*_i σ*_j with probability ξ,

    where σ*_i = +1 if vertex i is in the first cluster and σ*_i = −1 otherwise; ξ ∈ [0, 1/2) is a fixed constant.¹ (¹Under the censored block model, the graph G itself does not contain any information about the underlying clusters and we are interested in recovering the clusters by observing the graph and edge labels.)
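A minimal sketch of this edge-labeling mechanism, assuming the standard ±1 membership encoding σ (σ_i = +1 for the first cluster) and flip probability ξ; the function name and the 0-for-non-edge convention are illustrative choices:

```python
import random

def sample_censored_labels(sigma, p, xi, seed=0):
    """Censored block model: an Erdos-Renyi graph G(n, p) whose edges (i, j)
    carry labels L[i][j] = sigma[i]*sigma[j] with probability 1 - xi and the
    flipped sign with probability xi.  Non-edges get label 0.
    sigma is a +/-1 membership vector; xi in [0, 1/2) is the noise level."""
    rng = random.Random(seed)
    n = len(sigma)
    L = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:                       # edge is present
                flip = -1 if rng.random() < xi else 1  # label noise
                L[i][j] = L[j][i] = flip * sigma[i] * sigma[j]
    return L
```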

In all three cases, we show that a necessary condition for the maximum likelihood (ML) estimator to succeed coincides with a sufficient condition for the correctness of the SDP procedure, thereby establishing both the optimal recovery threshold and the optimality of the SDP relaxation. The proof techniques in this paper are similar to those in [24]; however, the construction and validation of dual certificates for the success of SDP are more challenging, especially in the multiple-cluster case. Notably, we resolve an open problem raised in [1, Section 6] about the optimal recovery threshold in the censored block model and show that the optimal recovery threshold can be achieved in polynomial time via SDP.

To further investigate the applicability of SDP procedures for community detection, we explored two cases in which the algorithm is adaptive to unknown cluster sizes. First, we found that for two clusters, the conditions for exact recovery are the most stringent in the equal-sized case. This suggests, and it is shown in Section 2.2, that if the cluster size constraint is replaced by an appropriate Lagrangian term not depending on the cluster size, exact recovery is achieved for all cluster sizes under the condition required for two equal-sized clusters. Second, we examined the general community detection problem with a fixed number of unequal-sized clusters and with outlier vertices, and identified a sufficient condition for the SDP procedure to achieve exact recovery with knowledge of only the smallest cluster size and the parameters p and q (see Section 5).

The optimality result of SDP has recently been extended to the case of a growing number of equal-sized clusters in [4] and to a fixed number of clusters with unequal sizes in [37].

Parallel independent work

The exact recovery problem in the logarithmic sparsity regime has been independently studied in [3] in a more general setting: given a fixed r × r matrix Q and a probability vector p = (p_1, …, p_r), the cardinality of the i-th community is assumed to be p_i n, and vertices in the i-th and j-th communities are connected independently with probability Q_ij log n / n. The optimal recovery threshold is obtained as a function of Q and p. In the special setting of Q_ij = a if i = j and Q_ij = b if i ≠ j, for two asymmetric clusters or multiple equal-sized clusters, their general optimal threshold reduces to those derived in this paper. Assuming full knowledge of the parameters Q and p, the optimal recovery threshold is further shown in [3] to be achievable in polynomial time via a two-phase procedure, consisting of a partial recovery algorithm followed by a cleanup step.

For the case of r equal-sized clusters, it is independently shown in [44] that the optimal recovery threshold can be achieved in polynomial time. Their clustering algorithm is a two-step procedure similar to [3], where partial recovery is achieved via a simple spectral algorithm. For the case of two unequal-sized clusters, a sufficient (but not tight) recovery condition is also derived in [44].

Further literature on SDP for cluster recovery

There has been a recent surge of interest in analyzing the semidefinite programming relaxation approach for cluster recovery; some of the latest developments are summarized below. For other recovery approaches such as spectral methods, we refer the reader to [14, 3] for details.

The SDP approach is mostly analyzed in the regime where the average degrees scale as log n, with the objective of exact cluster recovery. In this setting, the analysis often relies on the standard technique of dual witnesses, which amounts to constructing the dual variables so that the desired KKT conditions are satisfied for the primal variable corresponding to the true clusters. The SDP has been applied to recover cliques or densest subgraphs in [6, 7, 5]. For the stochastic block model with a possibly unbounded number of clusters, a sufficient condition for an SDP procedure to achieve exact recovery is obtained in [14], which improves the sufficient conditions in [13, 36] in terms of scaling. Various formulations of SDP for cluster recovery are discussed in [8]. The robustness of the SDP has been investigated in [18] for minimum bisection in the semirandom model with a monotone adversary and, more recently, in [11] for the generalized SBM with arbitrary outlier vertices. The SDP machinery has also been applied to recover clusters from partially observed graphs [12, 41] and binary matrices [43]. In the converse direction, necessary conditions for the success of particular SDPs are obtained in [42, 14]. In contrast to the previous work mentioned above, where the constants are often loose, the recent line of work initiated by [2, 1], and followed by [24, 10] and the current paper, focuses on establishing necessary and sufficient conditions with sharp constants in the special case of a fixed number of clusters, attained via SDP relaxations.

In the sparse graph case with bounded average degree, exact recovery is provably impossible and instead the goal is to achieve partial recovery, namely, to correctly cluster all but a small fraction of vertices. Using Grothendieck’s inequality, a sufficient condition for SDP to achieve partial recovery is obtained in [21]; the technique is extended to the labeled stochastic block model in [28]. In [32], an SDP-based test is applied to distinguish the binary symmetric stochastic block model versus the Erdős-Rényi random graph and shown to attain the optimal detection threshold.

Notation

Denote the identity matrix by I, the all-one matrix by J and the all-one vector by 1. We write Y ⪰ 0 if Y is symmetric and positive semidefinite, and Y ≥ 0 if all the entries of Y are non-negative. Let Sⁿ denote the set of all n × n symmetric matrices. For X ∈ Sⁿ, let λ₂(X) denote its second smallest eigenvalue. For any matrix X, let ‖X‖ denote its spectral norm. For any positive integer n, let [n] = {1, …, n}. For any set T ⊂ [n], let |T| denote its cardinality and Tᶜ denote its complement. For a, b ∈ ℝ, let a ∨ b = max{a, b} and a ∧ b = min{a, b}. We use standard big-O notation: for any sequences {aₙ} and {bₙ}, aₙ = O(bₙ) if there is an absolute constant c > 0 such that aₙ ≤ c bₙ, and aₙ = Ω(bₙ) if there is an absolute constant c > 0 such that aₙ ≥ c bₙ. Let Bern(p) denote the Bernoulli distribution with mean p and Binom(n, p) denote the binomial distribution with n trials and success probability p. All logarithms are natural and we use the convention 0 log 0 = 0.

2 Binary asymmetric SBM

2.1 Known cluster size

Let A denote the adjacency matrix of the graph and (C₁*, C₂*) denote the underlying true partition, where the clusters C₁* and C₂* have cardinalities K and n − K, respectively; we consider the asymptotic regime K = ⌈ρn⌉ as n → ∞ for ρ ∈ (0, 1/2] fixed. In this subsection we assume that K is known to the recovery procedure and the goal is to obtain the ρ-dependent optimal recovery threshold attained by SDP relaxations.

The cluster structure under the binary stochastic block model can be represented by a vector σ ∈ {±1}ⁿ such that σ_i = 1 if vertex i is in the first cluster and σ_i = −1 otherwise. Let σ* correspond to the true clusters. Then the ML estimator of σ* for the case p > q can be simply stated as

max_σ σᵀ A σ
s.t. σᵀ 1 = 2K − n, σ_i ∈ {±1}, i ∈ [n]
(1)
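For very small instances, the size-constrained ML estimator can be computed by exhaustive search; the following sketch uses the standard encoding (σ_i ∈ {±1} with the first cluster of size K) and is purely illustrative, since the search is exponential in n:

```python
from itertools import combinations

def ml_estimate(A, K):
    """Exhaustive ML estimate for the binary SBM (p > q): over all +/-1
    vectors sigma with exactly K entries equal to +1, maximize
    sigma^T A sigma, i.e. in-cluster edges minus cross-cluster edges."""
    n = len(A)
    best_val, best_sigma = float("-inf"), None
    for S in combinations(range(n), K):  # candidate first cluster
        members = set(S)
        sigma = [1 if i in members else -1 for i in range(n)]
        val = sum(A[i][j] * sigma[i] * sigma[j]
                  for i in range(n) for j in range(n))
        if val > best_val:
            best_val, best_sigma = val, sigma
    return best_sigma
```

On a noiseless instance (two disjoint cliques), the search recovers the planted partition exactly.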

which maximizes the number of in-cluster edges minus the number of out-of-cluster edges subject to the cluster size constraint. If K = n/2, (1) reduces to the minimum graph bisection problem, which is NP-hard in the worst case. Due to the computational intractability of the ML estimator, we next turn to its convex relaxation. Let Y = σσᵀ. Then Y_ii = 1 is equivalent to σ_i ∈ {±1}, and σᵀ1 = 2K − n if and only if ⟨J, Y⟩ = (2K − n)². Therefore, (1) can be recast as² (²Henceforth, all matrix variables in the optimization are symmetric.)

max_Y ⟨A, Y⟩
s.t. rank(Y) = 1, Y ⪰ 0, Y_ii = 1, i ∈ [n], ⟨J, Y⟩ = (2K − n)²
(2)

Notice that any feasible solution is a rank-one positive semidefinite matrix. Relaxing this condition by dropping the rank-one restriction, we obtain the following convex relaxation of eq:SBMML2_unbalanced, which is a semidefinite program:

max_Y ⟨A, Y⟩
s.t. Y ⪰ 0, Y_ii = 1, i ∈ [n], ⟨J, Y⟩ = (2K − n)²
(3)

We note that the only model parameter needed by the estimator (3) is the cluster size K.

Let Y* = σ*(σ*)ᵀ correspond to the true partition and let 𝒴 denote the set of all admissible partitions. The following result establishes the optimality of the SDP procedure.

Theorem 1.

If , then as , where

(4)

with , , , and

The proof of Theorem 1 is similar in outline to the proof given in [24], but a considerable detour is needed to handle the imbalance. The threshold function in (4) turns out to be the error exponent in the following large deviation events. For a vertex i, let e(i, C₁*) denote the number of edges between vertex i and vertices in C₁*, and define e(i, C₂*) similarly. Then,

Next we prove a converse to Theorem 1, which shows that the recovery threshold achieved by the SDP relaxation is in fact optimal.

Theorem 2.

If and is uniformly chosen over , then for any sequence of estimators , .

In the special case of two equal-sized clusters, we have ρ = 1/2 and the threshold reduces to √a − √b > √2. The corresponding threshold has been established in [2, 34], and the achievability by SDP has been shown in [24] and independently in [10].

A recent work [44] also studies the exact recovery problem in the unbalanced case and provides a sufficient (but not tight) recovery condition for a polynomial-time two-step procedure based on the spectral method.

2.2 Unknown cluster size

Theorem 1 shows that if one knows the relative cluster size ρ, the SDP relaxation (3) achieves the size-dependent optimal threshold. For fixed a and b, the threshold function is minimized at ρ = 1/2 (see the appendix for a proof). This suggests that for two communities the equal-sized case is the most difficult to cluster. Indeed, the next result proves that if there is no constraint on the cluster size, then the optimal recovery threshold coincides with that in the balanced case, i.e., √a − √b > √2, and can be achieved by a penalized SDP.

Theorem 3.

Let

Ŷ = arg max_Y ⟨A, Y⟩ − λ* ⟨J, Y⟩
s.t. Y ⪰ 0, Y_ii = 1, i ∈ [n]
(5)

where and . If , then as , where .

Remark 1.

Theorem 3 holds for all cluster sizes, including the extreme case where the entire network forms a single cluster (K = n), in which case the SDP (5) outputs Ŷ = J with high probability. The downside is that the penalization parameter depends on the model parameters a and b. Nevertheless, there exists a fully data-driven choice of the penalization parameter based on the degree distribution of the network, so that Theorem 3 continues to hold whenever the cluster sizes scale linearly in n; the price to pay for adaptivity is that the probability of error vanishes polylogarithmically instead of polynomially in n. See the appendix for details.

3 SBM with multiple equal-sized clusters

The cluster structure under the stochastic block model with r clusters of equal size K = n/r can be represented by r binary vectors ξ_1, …, ξ_r, where ξ_k is the indicator function of the k-th cluster, such that ξ_k(i) = 1 if vertex i is in cluster k and ξ_k(i) = 0 otherwise. Let {ξ*_k} correspond to the true clusters and let A denote the adjacency matrix. Then the maximum likelihood (ML) estimator of the clusters for the case p > q can be simply stated as

max_{ξ_1, …, ξ_r} Σ_{k=1}^{r} ξ_kᵀ A ξ_k
s.t. ξ_k ∈ {0, 1}ⁿ, k ∈ [r]; ξ_kᵀ ξ_l = 0, k ≠ l; ξ_kᵀ 1 = K, k ∈ [r]
(6)

which maximizes the number of in-cluster edges. Alternatively, one can encode the cluster structure from the vertices' perspective. Each vertex i is associated with a vector v_i which is allowed to be one of the r vectors ψ_1, …, ψ_r defined as follows: take an equilateral simplex in ℝ^{r−1} with vertices ψ_1, …, ψ_r such that ⟨ψ_k, ψ_k⟩ = 1 for all k and ⟨ψ_k, ψ_l⟩ = −1/(r − 1) for k ≠ l. Notice that Σ_{k=1}^{r} ψ_k = 0. Therefore, the ML estimator given in (6) can be recast as

max_{v_1, …, v_n} Σ_{i,j} A_ij ⟨v_i, v_j⟩
s.t. v_i ∈ {ψ_1, …, ψ_r}, i ∈ [n]; Σ_{i=1}^{n} v_i = 0
(7)
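One explicit construction of such simplex vectors (an illustrative choice; any rotation of it works equally well) centers and rescales the standard basis of ℝ^r, so the resulting vectors lie in the (r − 1)-dimensional subspace orthogonal to the all-one vector:

```python
import math

def simplex_vectors(r):
    """Return r unit vectors psi_1..psi_r (as rows, embedded in R^r) with
    <psi_k, psi_k> = 1, <psi_k, psi_l> = -1/(r-1) for k != l, and
    sum_k psi_k = 0, via psi_k = sqrt(r/(r-1)) * (e_k - (1/r) * 1)."""
    scale = math.sqrt(r / (r - 1))
    return [[scale * ((1.0 if i == k else 0.0) - 1.0 / r) for i in range(r)]
            for k in range(r)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))
```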

When r = 2, the above program includes the NP-hard minimum graph bisection problem as a special case. Let us consider a convex relaxation similar to the SDP relaxation studied by Goemans and Williamson [20] for MAX CUT and by Frieze and Jerrum [19] for MAX r-CUT and MAX BISECTION. To obtain an SDP relaxation, we replace v_i by a vector which is allowed to be any unit vector in ℝⁿ under the constraints Σ_i v_i = 0 and ⟨v_i, v_j⟩ ≥ −1/(r − 1). Defining Y ∈ Sⁿ such that Y_ij = ⟨v_i, v_j⟩, we obtain an SDP:

max_Y ⟨A, Y⟩
s.t. Y ⪰ 0, Y_ii = 1, i ∈ [n]; Y_ij ≥ −1/(r − 1), i ≠ j; ⟨J, Y⟩ = 0
(8)

We remark that we could as well have worked with the constraint Y1 = 0, which, for Y ⪰ 0, is equivalent to the last constraint in (8). Letting Z = ((r − 1)Y + J)/r, we can also equivalently rewrite (8) as

max_Z ⟨A, Z⟩
s.t. Z ⪰ 0, Z_ii = 1, i ∈ [n]; Z_ij ≥ 0, i ≠ j; Z1 = K1
(9)

The only model parameter needed by the estimator (9) is the cluster size K. Let Z* = Σ_{k=1}^{r} ξ*_k (ξ*_k)ᵀ correspond to the true clusters.

The sufficient condition for the success of the SDP (9) is given as follows.

Theorem 4.

If , then as .

The following result establishes the optimality of the SDP procedure.

Theorem 5.

If and the clusters are uniformly chosen at random among all -equal-sized partitions of , then for any sequence of estimators , as .

The optimal recovery threshold is also obtained by two parallel independent works [44, 3] via a polynomial-time two-step procedure, consisting of a partial recovery algorithm followed by a cleanup stage. The previous work [14] studies the stochastic block model in a much more general setting with r clusters of equal size K plus outlier vertices, where r may grow with n and the edge probabilities p and q may scale with n arbitrarily as long as p > q; it is shown that an SDP achieves exact recovery with high probability provided that

(10)

for some universal constant c. In the special setting where the network consists of a fixed number of clusters without outliers and p = a log n / n, q = b log n / n, the sufficient condition (10) is off by a constant factor compared to the sharp sufficient condition given by Theorem 4.

It is straightforward to extend the current proofs to the regime of a slowly growing number of clusters, showing that SDP continues to achieve the optimal recovery threshold. Indeed, the preprint [4] shows similar optimality results of SDP for a growing number of equal-sized clusters. Conversely, it has been recently proved in [25] that SDP relaxations cease to be optimal for logarithmically many communities, in the sense that SDP is constantwise suboptimal when the number of clusters is at least c log n for a large enough constant c and orderwise suboptimal when it grows faster than log n.

4 Binary censored block model

Under the binary censored block model, with possibly unequal cluster sizes, the cluster structure can be represented by a vector σ ∈ {±1}ⁿ such that σ_i = 1 if vertex i is in the first cluster and σ_i = −1 if vertex i is in the second cluster. Let σ* correspond to the true clusters. Let B denote the weighted adjacency matrix such that B_ij = 0 if i, j are not connected by an edge; B_ij = 1 if i, j are connected by an edge with label +1; and B_ij = −1 if i, j are connected by an edge with label −1. Then the ML estimator of σ* can be simply stated as

max_σ σᵀ B σ
s.t. σ_i ∈ {±1}, i ∈ [n]
(11)
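For tiny instances this ML estimator can again be computed by brute force; a sketch assuming the weighted-adjacency encoding described above (entries in {0, ±1}), fixing one sign to break the global-flip symmetry:

```python
from itertools import product

def ml_censored(B):
    """Exhaustive ML for the censored block model: maximize sigma^T B sigma
    over all +/-1 vectors sigma.  The first vertex's sign is fixed to +1,
    since the objective is invariant under a global sign flip."""
    n = len(B)
    best_val, best_sigma = float("-inf"), None
    for tail in product([1, -1], repeat=n - 1):
        sigma = (1,) + tail
        val = sum(B[i][j] * sigma[i] * sigma[j]
                  for i in range(n) for j in range(n))
        if val > best_val:
            best_val, best_sigma = val, list(sigma)
    return best_sigma
```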

which maximizes the number of in-cluster +1 edges minus that of in-cluster −1 edges, or equivalently, maximizes the number of cross-cluster −1 edges minus that of cross-cluster +1 edges. The NP-hard MAX CUT problem can be reduced to (11) by simply labeling all the edges in the input graph as −1 edges, and thus (11) is computationally intractable in the worst case. Instead, we consider the SDP studied in [1] obtained by convex relaxation. Let Y = σσᵀ. Then Y_ii = 1 is equivalent to σ_i ∈ {±1}. Therefore, (11) can be recast as

max_Y ⟨B, Y⟩
s.t. rank(Y) = 1, Y_ii = 1, i ∈ [n]
(12)

Replacing the rank-one constraint by positive semidefiniteness, we obtain the following convex relaxation of (12), which is an SDP:

max_Y ⟨B, Y⟩
s.t. Y ⪰ 0, Y_ii = 1, i ∈ [n]
(13)

We remark that the SDP (13) does not rely on any knowledge of the model parameters. Let Y* = σ*(σ*)ᵀ correspond to the true clusters. The following result establishes the success condition of the SDP procedure in the scaling regime p = a log n / n for a fixed constant a:

Theorem 6.

If , then as .

Next we prove a converse to Theorem 6, which shows that the recovery threshold achieved by the SDP relaxation is in fact optimal.

Theorem 7.

If and is uniformly chosen from , then for any sequence of estimators , as .

Theorem 7 continues to hold if the cluster sizes are proportional to n and known to the estimators, i.e., the prior distribution of σ* is uniform over vectors with a fixed cluster size proportional to n.

Denote by a*(ξ) the optimal recovery threshold, namely the infimum of a such that exact cluster recovery is possible with probability converging to one as n → ∞. Our results show that for all ξ ∈ [0, 1/2), the optimal recovery threshold is given by

(14)

and can be achieved by the SDP relaxations. The optimal recovery threshold is insensitive to the cluster sizes, which is in contrast to what we have seen for the binary stochastic block model.

Exact cluster recovery in the censored block model was previously studied in [1], where it is shown that the maximum likelihood estimator achieves the optimal recovery threshold in a certain limiting regime, while an SDP relaxation of the ML estimator succeeds under a stronger sufficient condition. The optimal recovery threshold for any fixed ξ, and whether it can be achieved in polynomial time, were previously unknown. Theorem 6 and Theorem 7 together show that the SDP relaxation achieves the optimal recovery threshold for any fixed constant ξ ∈ [0, 1/2). For the censored block model with the background graph being a random regular graph, it is further shown in [23] that SDP relaxations also achieve the optimal exact recovery threshold.

The above exact recovery threshold in the regime p = a log n / n should be contrasted with the positively correlated recovery threshold in the sparse regime p = a/n for a constant a. In this sparse regime, at least a constant fraction of vertices have no neighbors and exactly recovering the clusters is hopeless; instead, the goal is to find an estimator positively correlated with σ* up to a global flip of signs. It was conjectured in [26] that positively correlated recovery is possible if and only if a(1 − 2ξ)² > 1; the converse part is shown in [28], and recently it is proved in [38] that spectral algorithms achieve the sharp threshold in polynomial time.

5 An SDP for general cluster structure

In this section we consider SDPs for the general case of multiple clusters and outliers. We assume there are r clusters with sizes K_1, …, K_r and n − Σ_k K_k outlier vertices. Vertices in the same cluster are connected with probability p, while other pairs of vertices are connected with probability q. We consider the asymptotic regime p = a log n / n and q = b log n / n as n → ∞ for a > b > 0 fixed, with the cluster sizes scaling linearly in n. Let K_min = min_k K_k. We derive sufficient conditions for exact recovery by SDPs. While the conditions are not the tightest possible for specific cases, we would like to identify an algorithm that recovers the cluster matrix exactly without knowing the details of the cluster structure. As in Section 3, the true cluster matrix can be expressed as Z* = Σ_k ξ*_k (ξ*_k)ᵀ, where ξ*_k is the indicator function of the k-th cluster. Denote by 𝒵 the collection of all such cluster matrices.
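The cluster matrix can be built directly from a vertex-to-cluster assignment; a small illustrative sketch (the `None`-for-outlier encoding is an assumption of this snippet), which also makes the identities trace(Z*) = Σ_k K_k and ⟨J, Z*⟩ = Σ_k K_k² easy to check numerically:

```python
def cluster_matrix(assignment, n):
    """Build the cluster matrix Z* = sum_k xi_k xi_k^T from a vertex-to-
    cluster assignment (None marks an outlier): Z*[i][j] = 1 iff i and j lie
    in the same cluster, so the diagonal is 1 on clustered vertices and 0 on
    outliers."""
    return [[1 if assignment[i] is not None and assignment[i] == assignment[j]
             else 0
             for j in range(n)] for i in range(n)]
```

For instance, with clusters of sizes 2 and 3 plus one outlier, the trace equals 2 + 3 = 5 and the total sum equals 2² + 3² = 13.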

Consider the SDP

max_Z ⟨A, Z⟩
s.t. Z ⪰ 0; 0 ≤ Z_ij ≤ 1 for all i, j; ⟨I, Z⟩ = Σ_k K_k; ⟨J, Z⟩ = Σ_k K_k²
(15)

Implementing the SDP (15) requires no knowledge of the density parameters p and q, the number of clusters, or the sizes of the individual clusters; but it does require exact knowledge of the sum as well as the sum of squares of the cluster sizes, which, in practical applications, may be unrealistic to assume. Therefore, similarly to (5), we also consider the following penalized SDP, obtained by removing the constraints on those two quantities while augmenting the objective function:

max_Z ⟨A, Z⟩ − λ⟨I, Z⟩ − μ⟨J, Z⟩
s.t. Z ⪰ 0; 0 ≤ Z_ij ≤ 1 for all i, j
(16)

Here the penalization parameters λ and μ must be specified.

Clearly the above two SDPs are different and need not have the same solutions; nevertheless, they are similar enough that in the following theorem we state a sufficient condition for either of the SDPs to exactly recover Z* with high probability. Define

(17)

For fixed, is a strictly convex, nonnegative function in which is zero if and only if

Theorem 8.

Suppose there exists and with such that

(18)
(19)
(20)
(21)

(with the understanding that (19) and (20) can be dropped if there is only one cluster (i.e., r = 1), and (21) can be dropped unless there is only one cluster plus outlier vertices). Let for a sufficiently large constant and let If the estimate Ẑ is produced by either the SDP (15) or the SDP (16), then

We examine two simpler sufficient conditions for recovery, assuming we have enough information to implement one of the two SDPs and a lower bound on the cluster sizes K_k, but we do not know how many clusters there are nor whether there are outlier vertices. The conditions of Theorem 8 are most stringent when there are two clusters of the smallest possible size, and in that case we get the tightest result from the theorem by an appropriate selection, yielding the following corollary:

Corollary 1.

Let be the solution to (It satisfies ) If then

There is no simple expression for the solution in Corollary 1. If instead we consider a simpler equation, we obtain a smaller but explicit solution. Using this in the test, we obtain the following weaker but more explicit recovery condition, which, nevertheless, is within a factor of eight of the necessary condition (see Remark 2 below):

Corollary 2.

If then (If the SDP (16) is used, it is assumed that the penalization parameters are selected as in Theorem 8.)

Remark 2.

Let us compare the sufficient condition provided by Corollary 2 with necessary conditions for recovery. In the presence of outliers, is a necessary condition as shown in [24, Theorem 4], for otherwise we can swap a vertex in the smallest cluster with an outlier vertex to increase the number of in-cluster edges. Also, with at least two clusters,

(22)

is necessary, because we could have two smallest clusters of size K_min: even if a genie were to reveal all the other clusters, we would still need (22) to recover the two smallest ones, as shown by [2, Theorem 1]. By Lemma 11, ; so with or without outliers, is necessary. By Lemma 12, . Therefore we conclude that the sufficient condition of Corollary 2 is within a factor of four (resp. eight) of the necessary condition in the presence (resp. absence) of outliers.

6 Conclusions

This paper shows that the SDP procedure recovers the community structure at the asymptotically optimal threshold in various important settings beyond the case of two equal-sized clusters or that of a single cluster plus outliers considered in [24]. In particular, SDP relaxations work asymptotically optimally for two unequal clusters (with or without knowing the cluster size), for multiple equal-sized clusters, and for the binary censored block model with the background graph being Erdős-Rényi. These results demonstrate the versatility of SDP relaxation as a simple, general-purpose, computationally feasible methodology for community detection.

The picture is less impressive when these cases are combined into a general case with clusters of various sizes plus outliers. Still, we found that an SDP procedure can achieve exact recovery even without knowledge of the cluster sizes; the sufficient condition for recovery is within a factor of eight of the necessary information-theoretic bound. An interesting open problem is whether the SDP relaxation can achieve the optimal recovery threshold in this general case. The preprint [37] addresses this problem, showing that the SDP relaxation still achieves the optimal threshold for recovering a fixed number of clusters with unequal sizes.

7 Proofs

7.1 Proofs for Section 2: Binary asymmetric SBM

Lemma 1 ([24, Lemma 2]).

Let and for and , where for some as . Let be such that and for some and . Then

(23)
(24)
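The Chernoff-type tail bounds used throughout these proofs can be sanity-checked numerically. The sketch below bounds the upper tail of a single Binom(n, p) variable, optimizing the exponent over a coarse grid rather than in closed form (a simplification made here; the lemmas pick the optimal tilt analytically):

```python
import math

def binom_tail(n, p, t):
    """Exact P(X >= t) for X ~ Binom(n, p)."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(t, n + 1))

def chernoff_bound(n, p, t):
    """Chernoff upper bound: min over a grid of lam > 0 of
    exp(n * log(1 - p + p*e^lam) - lam*t).  Every grid point gives a valid
    bound, so the minimum over the grid is a valid (slightly loose) bound."""
    best = 1.0
    for i in range(1, 2001):
        lam = i * 0.005
        expo = n * math.log(1 - p + p * math.exp(lam)) - lam * t
        best = min(best, math.exp(expo))
    return best
```

Since each tilt λ yields a valid bound, the exact tail probability never exceeds the grid minimum.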
Lemma 2.

Suppose , and either or Let and be independent with and where and as . Let such that . If

(25)

where

with

Furthermore, for any such that ,

(26)
Proof.

We first prove the upper tail bound in (26) using Chernoff's bound. In particular,

(27)

where . Let , and . By definition,

Since is concave in , it achieves the supremum at such that

It suggests that when , we choose

with . Thus, using the inequality that , we have

Then in view of (27),

If , , and , then we let

with . It follows that

and thus the upper tail bound in (25) holds in view of (27). Next, we prove the lower tail bound in (25).

Case 1: