Computational Lower Bounds for Community Detection on Random Graphs

06/25/2014 · by Bruce Hajek, et al. · University of Illinois at Urbana-Champaign

This paper studies the problem of detecting the presence of a small dense community planted in a large Erdős-Rényi random graph G(N,q), where the edge probability within the community exceeds q by a constant factor. Assuming the hardness of the planted clique detection problem, we show that the computational complexity of detecting the community exhibits the following phase transition phenomenon: As the graph size N grows and the graph becomes sparser according to q=N^-α, there exists a critical value of α = 2/3, below which there exists a computationally intensive procedure that can detect far smaller communities than any computationally efficient procedure, and above which a linear-time procedure is statistically optimal. The results also lead to the average-case hardness results for recovering the dense community and approximating the densest K-subgraph.







1 Introduction

Networks often exhibit community structure, with many edges joining vertices of the same community and relatively few edges joining vertices of different communities. Detecting communities in networks has received a large amount of attention and has found numerous applications in the social and biological sciences (see, e.g., the exposition [23] and the references therein). While most previous work focuses on identifying the vertices in the communities, this paper studies the more basic problem of detecting the presence of a small community in a large random graph, proposed recently in [8]. This problem has practical applications, including detecting new events and monitoring clusters, and is also of theoretical interest for understanding the statistical and algorithmic limits of community detection [15].

Inspired by the model in [8], we formulate this community detection problem as a planted dense subgraph detection (PDS) problem. Specifically, let G(N, q) denote the Erdős-Rényi random graph with N vertices, where each pair of vertices is connected independently with probability q. Let G(N, K, p, q) denote the planted dense subgraph model with N vertices, where: (1) each vertex is included in the random set S independently with probability K/N; (2) any two vertices are connected independently with probability p if both of them are in S, and with probability q otherwise, where p > q. In this case, the vertices in S form a community with higher connectivity than elsewhere. The planted dense subgraph here has a random size with mean K, similar to the models adopted in [16, 34, 35, 22, 31], instead of a deterministic size as assumed in [8, 38, 15].
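As a concrete illustration, the two models can be sampled as follows. This is a minimal sketch (function and variable names are our own, not from the paper):

```python
import random

def sample_er(N, q, rng=None):
    """Sample an Erdős-Rényi graph G(N, q) as a set of edges (i, j), i < j."""
    rng = rng or random.Random(0)
    return {(i, j) for i in range(N) for j in range(i + 1, N)
            if rng.random() < q}

def sample_pds(N, K, p, q, rng=None):
    """Sample the planted dense subgraph model G(N, K, p, q).

    Each vertex joins the community S independently with probability K/N,
    so |S| is Binomial(N, K/N) with mean K. Edges inside S appear with
    probability p > q; all other edges appear with probability q.
    """
    rng = rng or random.Random(0)
    S = {v for v in range(N) if rng.random() < K / N}
    edges = set()
    for i in range(N):
        for j in range(i + 1, N):
            prob = p if (i in S and j in S) else q
            if rng.random() < prob:
                edges.add((i, j))
    return edges, S
```

Under the null hypothesis one observes `sample_er(N, q)`; under the alternative, `sample_pds(N, K, p, q)`.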

Definition 1.

The planted dense subgraph detection problem with parameters (N, K, p, q), henceforth denoted by PDS(N, K, p, q), refers to the problem of distinguishing the following two hypotheses:

H_0: G ~ G(N, q) versus H_1: G ~ G(N, K, p, q).

The statistical difficulty of the problem depends on the parameters (N, K, p, q). Intuitively, the distributions under the null and alternative hypotheses become less distinguishable if the expected dense subgraph size K decreases, if the edge probabilities p and q both decrease by the same factor, or if p decreases toward q. Recent results in [8, 38] obtained necessary and sufficient conditions for detecting planted dense subgraphs under certain assumptions on the parameters. However, it remains unclear whether the statistical fundamental limit can always be achieved by efficient procedures. In fact, it has been shown in [8, 38] that many popular low-complexity tests, such as the total degree test, the maximal degree test, the dense subgraph test, as well as tests based on certain convex relaxations, can be highly suboptimal. This observation prompts us to investigate the computational limits of the PDS problem, i.e., the sharp condition on the parameters under which the problem admits a computationally efficient test with vanishing error probability, and conversely, without which no polynomial-time algorithm can detect the planted dense subgraph reliably. To this end, we focus on the case where the community is denser by a constant factor than the rest of the graph, i.e., p = cq for some constant c > 1. Adopting the standard reduction approach in complexity theory, we show that the PDS problem in some parameter regime is at least as hard as the planted clique (PC) problem in some parameter regime, which is conjectured to be computationally intractable. Let G(n, k, γ) denote the planted clique model obtained by adding edges to k vertices chosen uniformly at random from the Erdős-Rényi graph G(n, γ) to form a clique.

Definition 2.

The PC detection problem with parameters (n, k, γ), denoted by PC(n, k, γ) henceforth, refers to the problem of distinguishing the following two hypotheses:

H_0: G ~ G(n, γ) versus H_1: G ~ G(n, k, γ).

The problem of finding the planted clique has been extensively studied for γ = 1/2, and the state-of-the-art polynomial-time algorithms [4, 20, 32, 21, 17, 6, 18] only work for k = Ω(√n). There is no known polynomial-time solver for the PC problem for k = o(√n) and any constant γ. It is conjectured [26, 25, 27, 2, 22] that the PC problem cannot be solved in polynomial time for k = o(√n), which we refer to as the PC Hypothesis.

Hypothesis 1.

Fix some constant 0 < γ ≤ 1/2. For any sequence of randomized polynomial-time tests {φ_n} for PC(n, k, γ) such that lim sup_{n→∞} log k / log n < 1/2, the Type-I+II error probability satisfies

lim inf_{n→∞} { P_{H_0}[φ_n(G) = 1] + P_{H_1}[φ_n(G) = 0] } ≥ 1,

i.e., the tests are asymptotically powerless.

The PC Hypothesis with γ = 1/2 is similar to [30, Hypothesis 1] and the corresponding hypothesis in [11]. Our computational lower bounds require that the PC Hypothesis holds for any positive constant γ. An even stronger assumption on PC hardness has been used in [7, Theorem 10.3] for public-key cryptography. Furthermore, [22, Corollary 5.8] shows that, under a statistical query model, any statistical algorithm requires a super-polynomial number of queries to detect the planted bi-clique in an Erdős-Rényi random bipartite graph.

1.1 Main Results

We consider the PDS(N, K, cq, q) problem in the following asymptotic regime:

K = Θ(N^β), q = Θ(N^-α),

where c > 1 is a fixed constant, α > 0 governs the sparsity of the graph,¹ and β ∈ (0, 1) captures the size of the dense subgraph. Clearly the detection problem becomes more difficult if either α increases or β decreases. Assuming the PC Hypothesis holds for any positive constant γ, we show that the parameter space of (α, β) is partitioned into three regimes, as depicted in fig:phase:

¹The case α ≥ 2 is not interesting, since detection is then impossible even if the planted subgraph is the entire graph (β = 1).

  • The Simple Regime: β > 1/2 + α/4. The dense subgraph can be detected in linear time with high probability by thresholding the total number of edges.

  • The Hard Regime: α < β < 1/2 + α/4. Reliable detection can be achieved by thresholding the maximum number of edges among all subgraphs of K vertices; however, assuming the PC Hypothesis, no polynomial-time solver exists in this regime.

  • The Impossible Regime: β < min{α, 1/2 + α/4}. No test can detect the planted subgraph, regardless of computational complexity.
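The partition of the (α, β) plane can be mapped out numerically. The following sketch (our own illustration; the boundary formulas are those discussed above) classifies a parameter pair:

```python
def pds_regime(alpha, beta):
    """Classify (alpha, beta) for PDS detection with q = N^-alpha, K = N^beta.

    Boundaries: the linear (edge-counting) test needs beta > 1/2 + alpha/4;
    the exhaustive scan test needs beta > alpha.
    """
    linear = 0.5 + alpha / 4   # computational boundary
    scan = alpha               # scan-test boundary
    if beta > linear:
        return "simple"        # linear-time detection succeeds
    if beta > scan:
        return "hard"          # statistically possible, conjectured hard
    return "impossible"        # no test succeeds
```

For example, `pds_regime(0.5, 0.55)` falls in the hard regime, since 0.5 < 0.55 < 0.625; note the hard regime is nonempty only for alpha < 2/3, where the two boundary lines cross.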




Figure 1: The simple (green), hard (red), impossible (gray) regimes for detecting the planted dense subgraph.

The computational hardness of the PDS problem exhibits a phase transition at the critical value α = 2/3: for moderately sparse graphs with α < 2/3, there exists a combinatorial algorithm that can detect far smaller communities than any efficient procedure; for highly sparse graphs with α > 2/3, optimal detection is achieved in linear time based on the total number of edges. Equivalently, attaining the statistical detection limit is computationally tractable only in the large-community regime. Therefore, perhaps surprisingly, the linear-time test based on the total number of edges is statistically optimal among all computationally efficient procedures, in the sense that no polynomial-time algorithm can reliably detect the community when β < 1/2 + α/4. It should be noted that fig:phase only captures the leading polynomial term according to the parametrization eq:scaling; at the boundary β = 1/2 + α/4, it is plausible that one needs to go beyond simple edge counting in order to achieve reliable detection. This is analogous to the planted clique problem, where the maximal degree test succeeds if the clique size satisfies k = Ω(√(n log n)) [29] and the more sophisticated spectral method succeeds if k = Ω(√n) [4].

The above hardness result should be contrasted with the recent study of community detection in the stochastic block model, where the community size scales linearly with the network size. When the edge density scales as 1/N [34, 35, 31] (resp. log N / N [1, 36, 24]), the statistically optimal threshold for partial (resp. exact) recovery can be attained in polynomial time, up to sharp constants. In comparison, this paper focuses on the regime where the community size grows sublinearly as N^β and the edge density decays polynomially as N^-α. It turns out that in this case even achieving the optimal exponent is computationally as demanding as solving the planted clique problem.

Our computational lower bound for the PDS problem also implies the average-case hardness of approximating the planted dense subgraph or the densest K-subgraph of the random graph ensemble G(N, K, p, q), complementing the worst-case inapproximability result in [3], which is likewise based on planted clique hardness. In particular, we show that no polynomial-time algorithm can approximate the planted dense subgraph or the densest K-subgraph within any constant factor in the regime α < β < 1/2 + α/4, which provides a partial answer to the conjecture made in [15, Conjecture 2.6] and the open problem raised in [3, Section 4] (see sec:recovery). Our approach and results can be extended to the bipartite graph case (see sec:bipartite) and shed light on the computational limits of the PDS problem with a fixed planted dense subgraph size studied in [8, 38] (see sec:fixedsize).

1.2 Connections to the Literature

This work is inspired by an emerging line of research (see, e.g., [28, 10, 11, 14, 30, 15, 39]) that examines high-dimensional inference problems from both the statistical and computational perspectives. Our computational lower bounds follow from a randomized polynomial-time reduction scheme that approximately reduces the PC problem, with appropriately chosen parameters, to the PDS problem. Below we discuss the connections to previous results and highlight the main technical contributions of this paper.

PC Hypothesis

Various hardness results in the theoretical computer science literature have been established based on the PC Hypothesis with γ = 1/2, e.g., cryptographic applications [27], approximating Nash equilibria [25], and testing k-wise independence [2]. More recently, the PC Hypothesis with γ = 1/2 has been used to investigate the penalty incurred by complexity constraints on certain high-dimensional statistical inference problems, such as detecting sparse principal components [11] and noisy biclustering (submatrix detection) [30]. Compared with most previous work, our computational lower bounds rely on the stronger assumption that the PC Hypothesis holds for any positive constant γ. An even stronger assumption on PC hardness has been used in [7] for public-key cryptography. It is an interesting open problem to prove that the PC Hypothesis for a fixed γ ∈ (0, 1/2) follows from that for γ = 1/2.

Reduction from the PC Problem

Most previous work [25, 2, 3, 7] in the theoretical computer science literature uses reductions from the PC problem to generate computationally hard instances of problems and to establish worst-case hardness results; the underlying distributions of the instances can be arbitrary. Similarly, in the recent works [11, 30] on the computational limits of certain minimax inference problems, the reduction from the PC problem is used to generate computationally hard but statistically feasible instances; the underlying distributions of the instances can also be arbitrary as long as they are valid priors on the parameter spaces. In contrast, here our goal is to establish the average-case hardness of the PDS problem based on that of the PC problem. Thus the underlying distributions of the problem instances generated by the reduction must be close, in total variation, to the desired distributions under both the null and alternative hypotheses. To this end, we start with a small dense graph generated from G(n, γ) under H_0 and from G(n, k, γ) under H_1, and arrive at a large sparse graph whose distribution is exactly G(N, q) under H_0 and approximately equal to G(N, K, p, q) under H_1. Notice that simply sparsifying the PC problem does not capture the desired tradeoff between the graph sparsity and the cluster size. Our reduction scheme differs from those used in [11, 30], which start with a large dense graph. Similar to ours, the reduction scheme in [3] also enlarges and sparsifies the graph by taking its subset power, but the distributions of the resulting random graphs are rather complicated and not close to the Erdős-Rényi type.

Inapproximability of the DKS Problem

The densest K-subgraph (DKS) problem refers to finding the K-vertex subgraph with the maximal number of edges. In view of the NP-hardness of the DKS problem, which follows from the NP-hardness of MAXCLIQUE, it is of interest to consider a ρ-factor approximation algorithm, which outputs a K-vertex subgraph containing at least a 1/ρ-fraction of the number of edges in the densest K-subgraph. Proving the NP-hardness of ρ-approximation of DKS for any fixed ρ > 1 is a longstanding open problem; see [3] for a comprehensive discussion. Assuming the PC Hypothesis holds with γ = 1/2, [3] shows that the DKS problem is hard to approximate within any constant factor even when the densest K-subgraph is a planted clique of polynomial size. This worst-case inapproximability result is in stark contrast to the average-case behavior in the planted dense subgraph model under the scaling eq:scaling, where it is known [15, 5] that the planted dense subgraph can be exactly recovered in polynomial time in the simple region of Fig. 2 below, implying that the densest K-subgraph can be approximated within a constant factor in polynomial time in that region. On the other hand, our computational lower bound shows that any constant-factor approximation of the densest K-subgraph is hard on average if α < β < 1/2 + α/4 (see sec:recovery).

Variants of the PDS Model

Three versions of the PDS model were considered in [12, Section 3]. Under all three, the graph under the null hypothesis is the Erdős-Rényi graph. The versions of the alternative hypothesis, in order of increasing difficulty of detection, are: (1) the random planted model, in which the graph under the alternative hypothesis is obtained by generating an Erdős-Rényi graph, selecting K nodes arbitrarily, and then resampling the edges among those nodes with a higher probability to form a denser Erdős-Rényi subgraph. This is somewhat more difficult to detect than the model of [8, 38], for which the choice of which nodes are in the planted dense subgraph is made before the edges of the graph are independently, randomly generated. (2) The dense in random model, in which both the nodes and edges of the planted dense K-subgraph are arbitrary. (3) The dense versus random model, in which the entire graph under the alternative hypothesis can be an arbitrary graph containing a dense K-subgraph. Our PDS model is closely related to the first of these three versions, the key difference being that in our model the size of the planted dense subgraph is binomially distributed with mean K (see Section 4.2). Thus, our hardness result is for the easiest type of detection problem. A bipartite graph variant of the PDS model is used in [9, p. 10] for financial applications, where the total number of edges is the same under both the null and alternative hypotheses. A hypergraph variant of the PDS problem is used in [7] for cryptographic applications.

1.3 Notations

For any set S, let |S| denote its cardinality. For any positive integer n, let [n] = {1, …, n}. For real a, b, let a ∧ b = min{a, b} and a ∨ b = max{a, b}. We use standard big-O notation; e.g., for any sequences {a_n} and {b_n}, a_n = O(b_n) if there is an absolute constant C such that a_n ≤ C b_n. Let Bern(p) denote the Bernoulli distribution with mean p, and Binom(n, p) the binomial distribution with n trials and success probability p. For random variables X and Y, we write X ⊥ Y if X is independent of Y. For probability measures P and Q, let d_TV(P, Q) denote the total variation distance and χ²(P ‖ Q) the χ²-divergence. The distribution of a random variable X is denoted by L(X). We write X ~ P if L(X) = P. All logarithms are natural unless the base is explicitly specified.

2 Statistical Limits

This section determines the statistical limit for the PDS(N, K, p, q) problem with p = cq for a fixed constant c > 1. For a given pair (N, K), one can ask: what is the smallest density q for which it is possible to reliably detect the planted dense subgraph? When the subgraph size is deterministic, this question has been thoroughly investigated by Arias-Castro and Verzelen [8, 38] for general (p, q), and the statistical limit with sharp constants has been obtained in certain asymptotic regimes. Their analysis treats the dense regime [8] and the sparse regime [38] separately. Since we focus on the special case p = cq and are only interested in characterizations up to absolute constants, we provide a simple non-asymptotic analysis that treats the dense and sparse regimes in a unified manner. Our results demonstrate that the PDS problem in def:HypTesting has the same statistical detection limit as the problem with a deterministic size studied in [8, 38].

2.1 Lower Bound

By the definition of the total variation distance, the optimal Type-I+II testing error probability is determined by the total variation distance between the distributions of the graph under the null and the alternative hypotheses:

min_φ { P_{H_0}[φ(G) = 1] + P_{H_1}[φ(G) = 0] } = 1 − d_TV(P_{H_1}, P_{H_0}).

The following result (proved in sec:pf-lb) shows that below a certain threshold on the parameters, there exists no test that can detect the planted subgraph reliably.

Proposition 1.

Suppose for some constant . There exists a function satisfying such that the following holds: For any , and ,


2.2 Upper Bound

Let A denote the adjacency matrix of the graph G. The detection limit can be achieved by the linear test statistic and the scan test statistic proposed in [8, 38]:

T_lin = Σ_{i<j} A_{ij},   T_scan = max_{S ⊂ [N]: |S| = K} Σ_{i<j: i,j ∈ S} A_{ij},

which correspond to the total number of edges in the whole graph and in the densest K-subgraph, respectively. Interestingly, the exact counterparts of these tests have been proposed and shown to be minimax optimal for detecting submatrices in Gaussian noise [13, 28, 30]. The following proposition bounds the error probabilities of the linear and scan tests.

Proposition 2.

Suppose for a constant . For the linear test statistic, set . For the scan test statistic, set . Then there exists a constant which only depends on such that

To illustrate the implications of the above lower and upper bounds, consider the PDS(N, K, cq, q) problem with the parametrization K = N^β and q = N^-α for α > 0 and β ∈ (0, 1). In this asymptotic regime, the fundamental detection limit is characterized by the function

β*(α) = min{1/2 + α/4, α},

which gives the statistical boundary in fig:phase. Indeed, if β < β*(α), then d_TV vanishes as a consequence of prop:lowerbound, and no sequence of tests can be reliable. Conversely, if β > β*(α), then prop:upperbound implies that the corresponding test achieves vanishing Type-I+II error probabilities. More precisely, the linear test succeeds in the regime β > 1/2 + α/4, while the scan test succeeds in the regime β > α.
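As a quick sanity check on the linear-test boundary, a back-of-the-envelope calculation (ours, under the parametrization K = N^β, q = N^-α, p = cq) compares the mean shift of the edge count under the alternative, of order K²(p − q), against the null standard deviation, of order √(N²q):

```latex
\frac{K^2 (p-q)}{\sqrt{N^2 q}}
\asymp \frac{N^{2\beta-\alpha}}{N^{1-\alpha/2}}
= N^{2\beta-\frac{\alpha}{2}-1} \to \infty
\quad\Longleftrightarrow\quad
\beta > \frac{1}{2} + \frac{\alpha}{4},
```

matching the boundary of the simple regime.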

Note that T_lin can be computed in linear time. However, computing T_scan amounts to enumerating all subsets of [N] of cardinality K, which is computationally intensive. Therefore it is unclear whether there exists a polynomial-time solver in the regime α < β < 1/2 + α/4. Assuming the PC Hypothesis, this question is resolved in the negative in the next section.
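For intuition, the two statistics can be prototyped directly. The sketch below (names are ours) is usable only for small graphs, since the scan statistic enumerates all K-vertex subsets:

```python
from itertools import combinations

def linear_stat(adj):
    """Total number of edges: computable in time linear in the input size."""
    n = len(adj)
    return sum(adj[i][j] for i in range(n) for j in range(i + 1, n))

def scan_stat(adj, K):
    """Maximum number of edges over all K-vertex subsets (exponential cost)."""
    n = len(adj)
    return max(sum(adj[i][j] for i, j in combinations(S, 2))
               for S in combinations(range(n), K))
```

Thresholding `linear_stat` gives the linear test; thresholding `scan_stat` gives the scan test.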

3 Computational Lower Bounds

In this section, we establish computational lower bounds for the PDS problem assuming the intractability of the planted clique problem. We show that the PC problem with appropriately chosen parameters can be approximately reduced to the PDS problem in randomized polynomial time. Based on this reduction scheme, we establish a formal connection between the PC problem and the PDS problem in prop:reduction, and the desired computational lower bounds follow in thm:main.

We aim to reduce the PC(n, k, γ) problem to the PDS(N, K, p, q) problem. For simplicity, we focus on the case p = 2q; the general case follows similarly, with changes in some numerical constants in the proof. We are given an adjacency matrix A, or equivalently a graph G, and with the help of additional randomness will map it to an adjacency matrix A', or equivalently a graph G', such that the hypothesis H_0 (resp. H_1) in def:PlantedCliqueDetection is mapped to H_0 exactly (resp. H_1 approximately) in def:HypTesting. In other words, if A is drawn from G(n, γ), then A' is distributed according to G(N, q); if A is drawn from G(n, k, γ), then the distribution of A' is close in total variation to that of G(N, K, 2q, q).

Our reduction scheme works as follows. Each vertex in G' is randomly assigned a parent vertex in G, with the choice of parent made independently for different vertices in G' and uniformly over the set of n vertices in G. Let V_s denote the set of vertices in G' with parent s, and let ℓ_s = |V_s|. The sets of children {V_1, …, V_n} then form a random partition of the vertex set of G'. For each pair 1 ≤ s ≤ t ≤ n, the number E_{s,t} of edges in G' between vertices in V_s and vertices in V_t (or within V_s if s = t) is selected randomly with a conditional probability distribution specified below. Given E_{s,t}, the particular set of edges of that cardinality is chosen uniformly at random.

It remains to specify, for 1 ≤ s ≤ t ≤ n, the conditional distribution of E_{s,t} given ℓ_s, ℓ_t, and A_{s,t}. Ideally, conditioned on ℓ_s and ℓ_t, we would like a Markov kernel from A_{s,t} to E_{s,t} which maps Bern(1) to the desired edge distribution Binom(ℓ_s ℓ_t, p) and Bern(γ) to Binom(ℓ_s ℓ_t, q), depending on whether both s and t are in the clique or not. Such a kernel, unfortunately, provably does not exist. Nonetheless, this objective can be accomplished approximately in terms of the total variation: for each pair (ℓ_s, ℓ_t), one can construct distributions P_{ℓ_s ℓ_t} and Q_{ℓ_s ℓ_t} on {0, 1, …, ℓ_s ℓ_t} satisfying the exact mixture identity γ P_{ℓ_s ℓ_t} + (1 − γ) Q_{ℓ_s ℓ_t} = Binom(ℓ_s ℓ_t, q), with P_{ℓ_s ℓ_t} close to Binom(ℓ_s ℓ_t, p) in total variation; both are well-defined probability distributions provided ℓ_s and ℓ_t are not too large. Then, for s < t, the conditional distribution of E_{s,t} given (ℓ_s, ℓ_t, A_{s,t}) is P_{ℓ_s ℓ_t} if A_{s,t} = 1 and Q_{ℓ_s ℓ_t} if A_{s,t} = 0, while for s = t the edges within V_s are generated independently with probability q each.
where . Let . As we show later, and are well-defined probability distributions as long as and , where . Then, for let the conditional distribution of given and be given by


The next proposition (proved in sec:pf-reduction) shows that the randomized reduction defined above maps G(n, γ) into G(N, q) under the null hypothesis and maps G(n, k, γ) approximately into G(N, K, p, q) under the alternative hypothesis. The intuition behind the reduction scheme is as follows. By construction, γ P_{ℓ_s ℓ_t} + (1 − γ) Q_{ℓ_s ℓ_t} = Binom(ℓ_s ℓ_t, q), and therefore the null distribution of the PC problem is exactly matched to that of the PDS problem, i.e., L(G') = G(N, q). The core of the proof lies in establishing that the alternative distributions are approximately matched. The key observation is that P_{ℓ_s ℓ_t} is close to Binom(ℓ_s ℓ_t, p), and thus for nodes with distinct parents in the planted clique the number of edges is approximately distributed as the desired Binom(ℓ_s ℓ_t, p); for nodes with the same parent s in the planted clique, even though E_{s,s} is distributed as Binom(ℓ_s(ℓ_s − 1)/2, q), which is not sufficiently close to the desired distribution with parameter p, the total variation distance becomes negligible after averaging over the random partition {V_s}.
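The combinatorial skeleton of this reduction (parent assignment, block-wise edge counts, uniform edge placement) can be sketched as follows. This is our own illustration: the carefully matched kernels P and Q are abstracted as caller-supplied samplers, and within-block edges are omitted for brevity.

```python
import random

def expand_graph(A, N, sample_P, sample_Q, rng=None):
    """Skeleton of the PC -> PDS reduction (distributional details omitted).

    A is the n x n adjacency matrix of the planted clique instance. Each of
    the N output vertices picks a uniform parent among the n input vertices;
    the edge *count* between child blocks V_s, V_t is drawn from sample_P(m)
    if A[s][t] == 1 and sample_Q(m) otherwise, where m = |V_s||V_t|, and the
    edges themselves are placed uniformly at random given the count.
    sample_P / sample_Q stand in for the paper's matched kernels.
    """
    rng = rng or random.Random(0)
    n = len(A)
    parent = [rng.randrange(n) for _ in range(N)]
    blocks = [[v for v in range(N) if parent[v] == s] for s in range(n)]
    edges = set()
    for s in range(n):
        for t in range(s + 1, n):
            pairs = [(i, j) for i in blocks[s] for j in blocks[t]]
            sampler = sample_P if A[s][t] else sample_Q
            count = min(sampler(len(pairs)), len(pairs))
            edges.update(tuple(sorted(e)) for e in rng.sample(pairs, count))
    return edges
```

In the actual reduction, `sample_P` and `sample_Q` draw from P_{ℓ_s ℓ_t} and Q_{ℓ_s ℓ_t}, whose mixture is exactly Binom(ℓ_s ℓ_t, q), so that the null distribution is matched exactly.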

Proposition 3.

Let , and . Let , , and . Assume that and . If , then , i.e., . If , then


An immediate consequence of prop:reduction is the following result (proved in sec:pf-test), showing that any solver for the PDS problem induces a solver for a corresponding instance of the PC problem.

Proposition 4.

Let the assumptions of prop:reduction hold. Suppose φ is a test for the PDS(N, K, 2q, q) problem with Type-I+II error probability δ. Then applying φ to the output of the reduction yields a test for the PC(n, k, γ) problem whose Type-I+II error probability is upper bounded by δ + ξ, with ξ given by the right-hand side of eq:defxi.

The following theorem establishes the computational limit of the PDS(N, K, 2q, q) problem, as shown in fig:phase.

Theorem 1.

Assume hyp:HypothesisPlantedClique holds for a fixed 0 < γ ≤ 1/2. Let α and β satisfy the condition in eq:computationallimit. Then there exists a sequence of parameters (N, K, q) satisfying the scaling eq:scaling such that, for any sequence of randomized polynomial-time tests φ for the PDS(N, K, 2q, q) problem, the Type-I+II error probability is bounded away from zero:

lim inf { P_{H_0}[φ(G) = 1] + P_{H_1}[φ(G) = 0] } > 0,

where G ~ G(N, q) under H_0 and G ~ G(N, K, 2q, q) under H_1. Consequently, if hyp:HypothesisPlantedClique holds for all 0 < γ ≤ 1/2, then the above holds for all α > 0 and β ∈ (0, 1) such that β < 1/2 + α/4.

Remark 1.

Consider the asymptotic regime given by eq:scaling. The function β#(α) = 1/2 + α/4 in eq:computationallimit gives the computational barrier for the PDS(N, K, 2q, q) problem (see fig:phase). Compared to the statistical limit β*(α) given in eq:detectionlimit, we note that β*(α) < β#(α) if and only if α < 2/3, in which case computational efficiency incurs a significant penalty on the detection performance. Interestingly, this phenomenon is in line with the observation reported in [30] for the noisy submatrix detection problem, where the statistical limit can be attained by efficient procedures if and only if the submatrix size exceeds a certain power of the matrix size.

4 Extensions and Open Problems

In this section, we discuss the extension of our results to: (1) the planted dense subgraph recovery and DKS problems; (2) the PDS problem where the planted dense subgraph has a deterministic size; and (3) the bipartite PDS problem.

4.1 Recovering Planted Dense Subgraphs and the DKS Problem

Closely related to the PDS detection problem is the recovery problem: given a graph generated from G(N, K, p, q), the task is to recover the planted dense subgraph. As a consequence of our computational lower bound for detection, we discuss implications for the tractability of the recovery problem as well as the closely related DKS problem, as illustrated in fig:planteddensesubgraph.

Consider the asymptotic regime of eq:scaling, in which the sharp conditions for recovery have been obtained in [15, 5]. Note that the recovery problem is harder than approximating the DKS, because if the planted dense subgraph is recovered with high probability, we obtain a constant-factor approximation of the densest K-subgraph in polynomial time. (If the planted dense subgraph has fewer than K vertices, output any K-subgraph containing it; otherwise output any of its K-subgraphs.) Results in [15, 5] imply that the planted dense subgraph can be recovered in polynomial time in the simple (green) regime of fig:planteddensesubgraph; consequently, a constant-factor approximation of the DKS can be found efficiently in this regime.

Conversely, suppose we are given a polynomial-time constant-factor approximation algorithm for the DKS problem with output Ŝ. Then we can distinguish H_0 versus H_1 in polynomial time as follows: declare H_1 if the edge density of Ŝ exceeds a suitably chosen threshold between q and p, and declare H_0 otherwise. One can show that, with high probability, the density of Ŝ falls below the threshold under H_0 and above it under H_1. Hence, our computational lower bounds for the PDS problem imply that the densest K-subgraph, as well as the planted dense subgraph, is hard to approximate within any constant factor if α < β < 1/2 + α/4 (the red regime in fig:phase). Whether the DKS is hard to approximate within any constant factor in the blue regime of fig:planteddensesubgraph is left as an interesting open problem.
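The detection-from-approximation argument above can be sketched as follows (a toy illustration with our own names; the threshold and the black-box approximation algorithm `approx_dks` are hypothetical placeholders, not the paper's exact constants):

```python
def subgraph_density(edges, S):
    """Fraction of the |S| choose 2 possible edges present inside S."""
    S = set(S)
    m = sum(1 for i, j in edges if i in S and j in S)
    return m / (len(S) * (len(S) - 1) / 2)

def detect_via_dks(edges, approx_dks, K, threshold):
    """Declare H1 iff the (approximate) densest K-subgraph is dense enough.

    approx_dks(edges, K) stands in for a constant-factor approximation
    algorithm returning a K-vertex set; with a threshold between q and p,
    its output density separates the two hypotheses with high probability.
    """
    S_hat = approx_dks(edges, K)
    return subgraph_density(edges, S_hat) > threshold
```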





Figure 2: The simple (green), hard (red), impossible (gray) regimes for recovering planted dense subgraphs, and the hardness in the blue regime remains open.

4.2 PDS Problem with a Deterministic Size

In the PDS problem with a deterministic size K, the null distribution corresponds to the Erdős-Rényi graph G(N, q); under the alternative, we choose K vertices uniformly at random to plant a dense subgraph with edge probability p. Although the subgraph size under our PDS model is binomially distributed and, in the asymptotic regime eq:scaling, sharply concentrated near its mean K, it is not entirely clear whether these two models are equivalent. Although our reduction scheme in sec:computationallimits extends to the fixed-size model with {V_1, …, V_n} being a random partition of the vertex set into classes of equal size N/n, so far we have not been able to prove that the alternative distributions are approximately matched: the main technical hurdle lies in controlling the total variation distance between the distribution of the edge counts after averaging over the random partition and the desired distribution.

Nonetheless, our result on the hardness of solving the PDS problem extends to the case of a deterministic dense subgraph size if the tests are required to be monotone. (A test φ is monotone if φ(G) = 1 implies φ(G') = 1 whenever G' is obtained by adding edges to G.) It is intuitive to assume that any reasonable test should be more likely to declare the existence of the planted dense subgraph if the graph contains more edges, as do the linear and scan tests defined in eq:tests. Moreover, by the monotonicity of the likelihood ratio, the statistically optimal test is also monotone. If we restrict our scope to monotone tests, then our computational lower bound implies that, for the PDS problem with a deterministic size, there is no efficiently computable monotone test in the hard regime α < β < 1/2 + α/4 of fig:phase. In fact, given a monotone polynomial-time solver φ for the PDS problem with a deterministic size, the PDS problem with a random size can be solved by φ in polynomial time, because with high probability the planted dense subgraph has at least the corresponding deterministic size. It is an interesting open problem to prove the computational lower bounds without restricting to monotone tests, or to prove that the optimal polynomial-time tests are monotone. We conjecture that the computational limit of the PDS problem with a fixed size is identical to that with a random size; this can indeed be established in the bipartite case, as discussed in the next subsection.

Finally, we can show that the PDS recovery problem with a deterministic planted dense subgraph size is computationally intractable if α < β < 1/2 + α/4 (the red regime in fig:phase). This follows from the fact that, given a polynomial-time algorithm for the PDS recovery problem with a deterministic size, we can construct a polynomial-time solver for the corresponding detection problem (see Appendix B for a formal statement and the proof).

4.3 Bipartite PDS Problem

Let G(N, N, q) denote the bipartite Erdős-Rényi random graph model with N top vertices and N bottom vertices, in which each pair of top and bottom vertices is connected independently with probability q. Let G(N, N, K, K, p, q) denote the bipartite variant of the planted dense subgraph model in def:HypTesting, with a planted dense subgraph of K top vertices and K bottom vertices on average. The bipartite PDS problem, denoted by BPDS, refers to the problem of testing H_0: G ~ G(N, N, q) versus H_1: G ~ G(N, N, K, K, p, q).

Consider the asymptotic regime of eq:scaling. Following the arguments in sec:statisticallimits, one can show that the statistical limit is given by β*(α), defined in eq:detectionlimit. To derive computational lower bounds, we use a reduction from the bipartite PC problem, denoted by BPC, which tests the bipartite Erdős-Rényi model against its bipartite planted clique variant with a planted bi-clique of k top and k bottom vertices. The BPC Hypothesis refers to the assumption that, for some constant 0 < γ ≤ 1/2, no sequence of randomized polynomial-time tests for BPC succeeds if k = o(√n). The reduction scheme from BPC to BPDS is analogous to the one used in the non-bipartite case, but the proof of the computational lower bound is much simpler for bipartite graphs. In particular, under the null hypothesis the distributions are exactly matched, and under the alternative hypothesis lmm:TotalVariationBound directly implies that the total variation distance between the relevant distributions is vanishing. Then, following the arguments in prop:test and thm:main, we conclude that if the BPC Hypothesis holds for any positive γ, then no efficiently computable test can solve BPDS in the regime given by eq:computationallimit. The same conclusion carries over to the bipartite PDS problem with a deterministic size, and the statistical and computational limits shown in fig:phase apply verbatim.


Appendix A Proofs

A.1 Proof of prop:lowerbound


Let P_S denote the distribution of the graph conditional on the community S under the alternative hypothesis. Since |S| ~ Binom(N, K/N), by the Chernoff bound, the probability that |S| exceeds a constant multiple of K is exponentially small in K. Therefore, we may bound the total variation distance by conditioning on the size of S, where the first inequality follows from the convexity of the total variation distance. Next we condition on |S| = m for a fixed m. Then S is uniformly distributed over all subsets of size m. Let S' be an independent copy of S. By the definition of the χ²-divergence and Fubini's theorem, the χ²-divergence can be expressed as an expectation over the overlap |S ∩ S'|, where step (a) is due to the independence of S and S', and step (b) follows from lmm:H in app:H with an appropriate choice of a function h satisfying h(0+) = 0. Therefore, we obtain eq:dtv2. Combining eq:dtv1 and eq:dtv2 yields eq:lb-tv with the desired function h. ∎

A.2 Proof of prop:upperbound


Let C denote a constant whose value depends only on c and may change line by line. Under H_0, the linear statistic is a sum of independent Bernoulli random variables, and its deviation from the mean is controlled by the Bernstein inequality. Under H_1, since |S| ~ Binom(N, K/N), the Chernoff bound implies that |S| concentrates around K. Conditional on |S| = m for some m, the linear statistic is distributed as an independent sum of binomial random variables, counting the edges within S and outside S respectively. By the multiplicative Chernoff bound (see, e.g., [33, Theorem 4.5]),