1 Introduction
Networks often exhibit community structure, with many edges joining vertices of the same community and relatively few edges joining vertices of different communities. Detecting communities in networks has received a large amount of attention and has found numerous applications in the social and biological sciences (see, e.g., the exposition [23] and the references therein). While most previous work focuses on identifying the vertices in the communities, this paper studies the more basic problem of detecting the presence of a small community in a large random graph, proposed recently in [8]. This problem has practical applications, including detecting new events and monitoring clusters, and is also of theoretical interest for understanding the statistical and algorithmic limits of community detection [15].
Inspired by the model in [8], we formulate this community detection problem as a planted dense subgraph detection (PDS) problem. Specifically, let $\mathcal{G}(n, q)$ denote the Erdős–Rényi random graph with $n$ vertices, where each pair of vertices is connected independently with probability $q$. Let $\mathcal{G}(n, K, p, q)$ denote the planted dense subgraph model with $n$ vertices where: (1) each vertex is included in the random set $S$ independently with probability $K/n$; (2) any two vertices are connected independently with probability $p$ if both of them are in $S$ and with probability $q$ otherwise, where $p > q$. In this case, the vertices in $S$ form a community with higher connectivity than elsewhere. The planted dense subgraph here has a random size with mean $K$, which is similar to the models adopted in [16, 34, 35, 22, 31], instead of a deterministic size as assumed in [8, 38, 15].
Definition 1.
The planted dense subgraph detection problem with parameters $(n, K, p, q)$, henceforth denoted by $\mathrm{PDS}(n, K, p, q)$, refers to the problem of distinguishing the hypotheses:
$$H_0: G \sim \mathcal{G}(n, q), \qquad H_1: G \sim \mathcal{G}(n, K, p, q).$$
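To make the two hypotheses concrete, the following is a minimal simulation sketch (the function names and the edge-set representation are my own, not from the paper; under $H_0$ one observes only the Erdős–Rényi graph, under $H_1$ the community $S$ is hidden):

```python
import random

def sample_er(n, q, rng):
    """Sample the Erdos-Renyi graph G(n, q): each pair is an edge w.p. q."""
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < q}

def sample_pds(n, K, p, q, rng):
    """Sample the planted dense subgraph model G(n, K, p, q).

    Each vertex joins the community S independently with probability K/n,
    so |S| is binomial with mean K; pairs inside S are connected with
    probability p > q, and all other pairs with probability q.
    """
    S = {v for v in range(n) if rng.random() < K / n}
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            r = p if (i in S and j in S) else q
            if rng.random() < r:
                edges.add((i, j))
    return S, edges
```
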
The statistical difficulty of the problem depends on the parameters $(n, K, p, q)$. Intuitively, the distributions under the null and the alternative hypotheses become less distinguishable if the expected dense subgraph size $K$ decreases, if the edge probabilities $p$ and $q$ both decrease by the same factor, or if $p$ decreases with $q$ fixed. Recent results in [8, 38] obtained necessary and sufficient conditions for detecting planted dense subgraphs under certain assumptions on the parameters. However, it remains unclear whether the statistical fundamental limit can always be achieved by efficient procedures. In fact, it has been shown in [8, 38] that many popular low-complexity tests, such as the total degree test, the maximal degree test, the dense subgraph test, as well as tests based on certain convex relaxations, can be highly suboptimal. This observation prompts us to investigate the computational limits of the PDS problem, i.e., the sharp condition on $(n, K, p, q)$ under which the problem admits a computationally efficient test with vanishing error probability, and conversely, without which no algorithm can detect the planted dense subgraph reliably in polynomial time. To this end, we focus on a particular case where the community is denser by a constant factor than the rest of the graph, i.e., $p = cq$ for some constant $c > 1$. Adopting the standard reduction approach in complexity theory, we show that the PDS problem in some parameter regime is at least as hard as the planted clique (PC) problem in some parameter regime, which is conjectured to be computationally intractable. Let $\mathcal{G}(n, k, \gamma)$ denote the planted clique model in which we add edges to $k$ vertices uniformly chosen from $\mathcal{G}(n, \gamma)$ to form a clique.
Definition 2.
The PC detection problem with parameters $(n, k, \gamma)$, denoted by $\mathrm{PC}(n, k, \gamma)$ henceforth, refers to the problem of distinguishing the hypotheses:
$$H_0^{\mathrm{C}}: G \sim \mathcal{G}(n, \gamma), \qquad H_1^{\mathrm{C}}: G \sim \mathcal{G}(n, k, \gamma).$$
The problem of finding the planted clique has been extensively studied for $\gamma = 1/2$, and the state-of-the-art polynomial-time algorithms [4, 20, 32, 21, 17, 6, 18] only work for $k = \Omega(\sqrt{n})$. There is no known polynomial-time solver for the PC problem for $k = o(\sqrt{n})$ and any constant $\gamma > 0$. It is conjectured [26, 25, 27, 2, 22] that the PC problem cannot be solved in polynomial time for $k = o(\sqrt{n})$, which we refer to as the PC Hypothesis.
Hypothesis 1.
Fix some constant $\gamma \in (0, 1/2]$. For any sequence of randomized polynomial-time tests $\{\psi_n\}$ such that $\limsup_{n \to \infty} \log k_n / \log n < 1/2$,
$$\liminf_{n \to \infty} \left\{ \mathbb{P}_{H_0^{\mathrm{C}}}[\psi_n(G) = 1] + \mathbb{P}_{H_1^{\mathrm{C}}}[\psi_n(G) = 0] \right\} \ge 1.$$
The PC Hypothesis with $\gamma = 1/2$ is similar to [30, Hypothesis 1] and [11, Hypothesis $\mathbf{B}_{\mathrm{PC}}$]. Our computational lower bounds require that the PC Hypothesis holds for any positive constant $\gamma$. An even stronger form of the PC Hypothesis has been used in [7, Theorem 10.3] for public-key cryptography. Furthermore, [22, Corollary 5.8] shows that, under a statistical query model, any statistical algorithm requires at least $n^{\Omega(\log n)}$ queries for detecting the planted bi-clique in an Erdős–Rényi random bipartite graph.
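For concreteness, the planted clique model of Definition 2 can be sampled as follows (a sketch with my own function names, not from the paper):

```python
import random

def sample_planted_clique(n, k, gamma, rng):
    """Sample G(n, k, gamma): start from an Erdos-Renyi graph G(n, gamma)
    and add the missing edges among k uniformly chosen vertices so that
    they form a clique."""
    edges = {(i, j) for i in range(n) for j in range(i + 1, n)
             if rng.random() < gamma}
    clique = sorted(rng.sample(range(n), k))
    for a in range(k):
        for b in range(a + 1, k):
            edges.add((clique[a], clique[b]))
    return set(clique), edges
```
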
1.1 Main Results
We consider the PDS problem in the following asymptotic regime:
(1) $\quad p = cq = \Theta(n^{-\alpha}), \qquad K = \Theta(n^{\beta}),$
where $c > 1$ is a fixed constant, $\alpha \in (0, 2)$ governs the sparsity of the graph (the case $\alpha \ge 2$ is not interesting, since detection is impossible even if the planted subgraph is the entire graph, $K = n$), and $\beta \in (0, 1)$ captures the size of the dense subgraph. Clearly the detection problem becomes more difficult if either $\alpha$ increases or $\beta$ decreases. Assuming the PC Hypothesis holds for any positive constant $\gamma$, we show that the parameter space of $(\alpha, \beta)$ is partitioned into three regimes as depicted in fig:phase:

The Simple Regime: $\beta > \frac{1}{2} + \frac{\alpha}{4}$. The dense subgraph can be detected in linear time with high probability by thresholding the total number of edges.

The Hard Regime: $\alpha < \beta < \frac{1}{2} + \frac{\alpha}{4}$. Reliable detection can be achieved by thresholding the maximum number of edges among all subgraphs of size $K$; however, assuming the PC Hypothesis, no polynomial-time solver exists in this regime.

The Impossible Regime: $\beta < \min\{\alpha, \frac{1}{2} + \frac{\alpha}{4}\}$. No test can detect the planted subgraph, regardless of computational complexity.
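Under the parametrization (1), the partition of the $(\alpha, \beta)$ plane can be summarized programmatically as follows (a sketch; the boundary cases are assigned to the harder side, and the labels match fig:phase only up to the polynomial order captured by (1)):

```python
def pds_regime(alpha, beta):
    """Classify (alpha, beta), where q = Theta(n^-alpha), K = Theta(n^beta)."""
    assert 0 < alpha < 2 and 0 < beta < 1
    if beta > 0.5 + alpha / 4:
        return "simple"      # linear-time edge counting succeeds
    if beta > alpha:
        return "hard"        # scan test succeeds; conjecturally no poly-time test
    return "impossible"      # below the statistical limit
```

Note that the "hard" branch can only be reached when $\alpha < 2/3$, since otherwise $\alpha \ge \frac{1}{2} + \frac{\alpha}{4}$.
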
The computational hardness of the PDS problem exhibits a phase transition at the critical value $\alpha = 2/3$: For moderately sparse graphs with $\alpha < 2/3$, there exists a combinatorial algorithm that can detect far smaller communities than any efficient procedure; for highly sparse graphs with $\alpha \ge 2/3$, optimal detection is achieved in linear time based on the total number of edges. Equivalently, attaining the statistical detection limit is computationally tractable only in the large-community regime ($\beta \ge 2/3$). Therefore, surprisingly, the linear-time test based on the total number of edges is always statistically optimal among all computationally efficient procedures, in the sense that no polynomial-time algorithm can reliably detect the community when $\beta < \frac{1}{2} + \frac{\alpha}{4}$. It should be noted that fig:phase only captures the leading polynomial term according to the parametrization eq:scaling; at the boundary $\beta = \frac{1}{2} + \frac{\alpha}{4}$, it is plausible that one needs to go beyond simple edge counting in order to achieve reliable detection. This is analogous to the planted clique problem, where the maximal degree test succeeds if the clique size satisfies $k = \Omega(\sqrt{n \log n})$ [29] and the more sophisticated spectral method succeeds if $k = \Omega(\sqrt{n})$ [4].
The above hardness result should be contrasted with the recent study of community detection on the stochastic block model, where the community size scales linearly with the network size. When the edge density scales as $\Theta(1/n)$ [34, 35, 31] (resp. $\Theta(\log n / n)$ [1, 36, 24]), the statistically optimal threshold for partial (resp. exact) recovery can be attained in polynomial time up to sharp constants. In comparison, this paper focuses on the regime where the community size grows sublinearly, as $\Theta(n^{\beta})$ with $\beta < 1$, and the edge density decays as $\Theta(n^{-\alpha})$. It turns out that in this case even achieving the optimal exponent is computationally as demanding as solving the planted clique problem.
Our computational lower bound for the PDS problem also implies the average-case hardness of approximating the planted dense subgraph or the densest $K$-subgraph of the random graph ensemble $\mathcal{G}(n, K, p, q)$, complementing the worst-case inapproximability result in [3], which is based on the planted clique hardness as well. In particular, we show that no polynomial-time algorithm can approximate the planted dense subgraph or the densest $K$-subgraph within any constant factor in the regime $\alpha < \beta < \frac{1}{2} + \frac{\alpha}{4}$, which provides a partial answer to the conjecture made in [15, Conjecture 2.6] and the open problem raised in [3, Section 4] (see sec:recovery). Our approach and results can be extended to the bipartite graph case (see sec:bipartite) and shed light on the computational limits of the PDS problem with a fixed planted dense subgraph size studied in [8, 38] (see sec:fixedsize).
1.2 Connections to the Literature
This work is inspired by an emerging line of research (see, e.g., [28, 10, 11, 14, 30, 15, 39]) which examines high-dimensional inference problems from both the statistical and computational perspectives. Our computational lower bounds follow from a randomized polynomial-time reduction scheme which approximately reduces the PC problem to the PDS problem with appropriately chosen parameters. Below we discuss the connections to previous results and highlight the main technical contributions of this paper.
PC Hypothesis
Various hardness results in the theoretical computer science literature have been established based on the PC Hypothesis with $\gamma = 1/2$, e.g., cryptographic applications [27], approximating Nash equilibria [25], testing $k$-wise independence [2], etc. More recently, the PC Hypothesis with $\gamma = 1/2$ has been used to investigate the penalty incurred by complexity constraints on certain high-dimensional statistical inference problems, such as detecting sparse principal components [11] and noisy biclustering (submatrix detection) [30]. Compared with most previous works, our computational lower bounds rely on the stronger assumption that the PC Hypothesis holds for any positive constant $\gamma$. An even stronger form of the PC Hypothesis has been used in [7] for public-key cryptography. It is an interesting open problem to prove that the PC Hypothesis for a fixed constant $\gamma$ follows from that for $\gamma = 1/2$.
Reduction from the PC Problem
Most previous work [25, 2, 3, 7] in the theoretical computer science literature uses reductions from the PC problem to generate computationally hard instances of problems and establish worst-case hardness results; the underlying distributions of the instances can be arbitrary. Similarly, in the recent works [11, 30] on the computational limits of certain minimax inference problems, the reduction from the PC problem is used to generate computationally hard but statistically feasible instances; the underlying distributions of the instances can also be arbitrary as long as they are valid priors on the parameter spaces. In contrast, here our goal is to establish the average-case hardness of the PDS problem based on that of the PC problem. Thus the underlying distributions of the problem instances generated by the reduction must be close, in total variation, to the desired distributions under both the null and alternative hypotheses. To this end, we start with a small dense graph generated from $\mathcal{G}(n, 1/2)$ under $H_0^{\mathrm{C}}$ and from $\mathcal{G}(n, k, 1/2)$ under $H_1^{\mathrm{C}}$, and arrive at a large sparse graph whose distribution is exactly $\mathcal{G}(N, q)$ under $H_0$ and approximately equal to $\mathcal{G}(N, K, p, q)$ under $H_1$. Notice that simply sparsifying the PC problem does not capture the desired tradeoff between the graph sparsity and the cluster size. Our reduction scheme differs from those used in [11, 30], which start with a large dense graph. Similar to ours, the reduction scheme in [3] also enlarges and sparsifies the graph by taking its subset power; but the distributions of the resulting random graphs are rather complicated and not close to the Erdős–Rényi type.
Inapproximability of the DKS Problem
The densest $K$-subgraph (DKS) problem refers to finding the subgraph of $K$ vertices with the maximal number of edges. In view of the NP-hardness of the DKS problem, which follows from the NP-hardness of MAX-CLIQUE, it is of interest to consider a $\rho$-factor approximation algorithm, which outputs a subgraph with $K$ vertices containing at least a $1/\rho$ fraction of the number of edges in the densest $K$-subgraph. Proving the NP-hardness of approximating DKS within any fixed factor is a long-standing open problem. See [3] for a comprehensive discussion. Assuming the PC Hypothesis holds with $\gamma = 1/2$, [3] shows that the DKS problem is hard to approximate within any constant factor even if the densest subgraph is a clique of polynomial size, where $N$ denotes the total number of vertices. This worst-case inapproximability result is in stark contrast to the average-case behavior in the planted dense subgraph model under the scaling eq:scaling, where it is known [15, 5] that the planted dense subgraph can be exactly recovered in polynomial time in the simple regime of Fig. 2 below, implying that the densest $K$-subgraph can be approximated within a factor of $1 + \epsilon$ in polynomial time for any $\epsilon > 0$. On the other hand, our computational lower bound shows that any constant-factor approximation of the densest $K$-subgraph has high average-case hardness if $\alpha < \beta < \frac{1}{2} + \frac{\alpha}{4}$ (see sec:recovery).
Variants of the PDS Model
Three versions of the PDS model were considered in [12, Section 3]. Under all three, the graph under the null hypothesis is the Erdős–Rényi graph. The versions of the alternative hypothesis, in order of increasing difficulty of detection, are: (1) The random planted model, in which the graph under the alternative hypothesis is obtained by generating an Erdős–Rényi graph, selecting $K$ nodes arbitrarily, and then resampling the edges among those $K$ nodes with a higher probability to form a denser Erdős–Rényi subgraph. This is somewhat more difficult to detect than the model of [8, 38], for which the choice of which nodes are in the planted dense subgraph is made before the edges of the graph are independently, randomly generated. (2) The dense-in-random model, in which both the nodes and the edges of the planted dense subgraph are arbitrary. (3) The dense-versus-random model, in which the entire graph under the alternative hypothesis can be an arbitrary graph containing a dense subgraph. Our PDS model is closely related to the first of these three versions, the key difference being that in our model the size of the planted dense subgraph is binomially distributed with mean $K$ (see Section 4.2). Thus, our hardness result is for the easiest type of detection problem. A bipartite-graph variant of the PDS model is used in [9, p. 10] for financial applications, where the total number of edges is the same under both the null and the alternative hypothesis. A hypergraph variant of the PDS problem is used in [7] for cryptographic applications.
1.3 Notations
For any set $S$, let $|S|$ denote its cardinality. For any positive integer $n$, let $[n] = \{1, \ldots, n\}$. For $a, b \in \mathbb{R}$, let $a \wedge b = \min\{a, b\}$ and $a \vee b = \max\{a, b\}$. We use standard big-$O$ notations, e.g., for any sequences $\{a_n\}$ and $\{b_n\}$, $a_n = \Theta(b_n)$ if there is an absolute constant $c > 0$ such that $1/c \le a_n / b_n \le c$. Let $\mathrm{Bern}(p)$ denote the Bernoulli distribution with mean $p$ and $\mathrm{Binom}(n, p)$ the binomial distribution with $n$ trials and success probability $p$. For random variables $X, Y$, we write $X \perp Y$ if $X$ is independent of $Y$. For probability measures $P$ and $Q$, let $d_{\mathrm{TV}}(P, Q)$ denote the total variation distance and $D(P \| Q)$ the divergence. The distribution of a random variable $X$ is denoted by $\mathcal{L}(X)$. We write $X \sim P$ if $\mathcal{L}(X) = P$. All logarithms are natural unless the base is explicitly specified.
2 Statistical Limits
This section determines the statistical limit of the PDS problem with $p = cq$ for a fixed constant $c > 1$. For a given pair $(n, K)$, one can ask the question: What is the smallest density $q$ such that it is possible to reliably detect the planted dense subgraph? When the subgraph size is deterministic, this question has been thoroughly investigated by Arias-Castro and Verzelen [8, 38] for general $(p, q)$, and the statistical limit with sharp constants has been obtained in certain asymptotic regimes. Their analysis treats the dense regime [8] and the sparse regime [38] separately. Here, as we focus on the special case of $p = cq$ and are only interested in characterizations within absolute constants, we provide a simple non-asymptotic analysis which treats the dense and sparse regimes in a unified manner. Our results demonstrate that the PDS problem in def:HypTesting has the same statistical detection limit as the problem with a deterministic size studied in [8, 38].
2.1 Lower Bound
By the definition of the total variation distance, the optimal testing error probability is determined by the total variation distance between the distributions under the null and the alternative hypotheses:
$$\min_{\phi} \left\{ \mathbb{P}_{H_0}[\phi(G) = 1] + \mathbb{P}_{H_1}[\phi(G) = 0] \right\} = 1 - d_{\mathrm{TV}}\big(\mathcal{G}(n, K, p, q), \mathcal{G}(n, q)\big).$$
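This identity can be checked numerically on tiny instances by brute-force enumeration of all graphs and all realizations of the community (my own sketch, feasible only for very small $n$):

```python
from itertools import combinations, product

def pds_total_variation(n, K, p, q):
    """Exact TV distance between G(n, q) and G(n, K, p, q), computed by
    enumerating all 2^(n choose 2) graphs.  The optimal Type-I+II testing
    error then equals 1 minus this quantity."""
    pairs = list(combinations(range(n), 2))
    tv = 0.0
    for g in product([0, 1], repeat=len(pairs)):
        # null probability of this particular graph
        p0 = 1.0
        for e in g:
            p0 *= q if e else 1 - q
        # alternative probability: average over the random community S
        p1 = 0.0
        for bits in product([0, 1], repeat=n):
            ps = 1.0
            for v in bits:
                ps *= K / n if v else 1 - K / n
            for (u, v), e in zip(pairs, g):
                r = p if bits[u] and bits[v] else q
                ps *= r if e else 1 - r
            p1 += ps
        tv += abs(p0 - p1)
    return tv / 2
```
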
The following result (proved in sec:pflb) shows that if the parameters fall below the threshold in eq. (2), then there exists no test which can detect the planted subgraph reliably.
Proposition 1.
Suppose $p = cq$ for some constant $c > 1$. There exists a function satisfying such that the following holds: For any , and ,
(2) 
2.2 Upper Bound
Let $A$ denote the adjacency matrix of the graph $G$. The detection limit can be achieved by the linear test statistic and the scan test statistic proposed in [8, 38]:
(3) $\quad T_{\mathrm{lin}} = \sum_{i < j} A_{ij}, \qquad T_{\mathrm{scan}} = \max_{S \subset [n]:\, |S| = K} \sum_{i < j:\, i, j \in S} A_{ij},$
which correspond to the total number of edges in the whole graph and in the densest $K$-subgraph, respectively. Interestingly, the exact counterparts of these tests have been proposed and shown to be minimax optimal for detecting submatrices in Gaussian noise [13, 28, 30]. The following proposition bounds the error probabilities of the linear and scan tests.
Proposition 2.
Suppose $p = cq$ for a constant $c > 1$. For the linear test statistic, set the threshold . For the scan test statistic, set the threshold . Then there exists a constant which only depends on $c$ such that
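For reference, the two statistics in (3) can be computed as follows (a brute-force sketch with my own function names; the scan statistic enumerates all size-$k$ subsets and hence takes exponential time):

```python
from itertools import combinations

def linear_stat(A):
    """Total number of edges of the graph with (symmetric 0/1) adjacency
    matrix A, i.e., the sum of the upper triangle."""
    n = len(A)
    return sum(A[i][j] for i in range(n) for j in range(i + 1, n))

def scan_stat(A, k):
    """Maximum number of edges over all vertex subsets of size k."""
    n = len(A)
    return max(sum(A[i][j] for i, j in combinations(S, 2))
               for S in combinations(range(n), k))
```
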
To illustrate the implications of the above lower and upper bounds, consider the PDS problem with the parametrization $p = cq = \Theta(n^{-\alpha})$ and $K = \Theta(n^{\beta})$ for a constant $c > 1$, $\alpha \in (0, 2)$ and $\beta \in (0, 1)$. In this asymptotic regime, the fundamental detection limit is characterized by the following function:
(4) $\quad \beta^*(\alpha) = \min\left\{ \alpha, \ \frac{1}{2} + \frac{\alpha}{4} \right\},$
which gives the statistical boundary in fig:phase. Indeed, if $\beta < \beta^*(\alpha)$, then as a consequence of prop:lowerbound the Type-I+II error probability tends to one for any sequence of tests. Conversely, if $\beta > \beta^*(\alpha)$, then prop:upperbound implies that the combination of the linear and scan tests achieves vanishing Type-I+II error probabilities. More precisely, the linear test succeeds in the regime $\beta > \frac{1}{2} + \frac{\alpha}{4}$, while the scan test succeeds in the regime $\beta > \alpha$.
Note that the linear test statistic can be computed in linear time. However, computing the scan test statistic amounts to enumerating all subsets of $[n]$ of cardinality $K$, which can be computationally intensive. Therefore it is unclear whether there exists a polynomial-time solver in the regime $\alpha < \beta < \frac{1}{2} + \frac{\alpha}{4}$. Assuming the PC Hypothesis, this question is resolved in the negative in the next section.
3 Computational Lower Bounds
In this section, we establish the computational lower bounds for the PDS problem assuming the intractability of the planted clique problem. We show that the PDS problem can be approximately reduced from the PC problem with appropriately chosen parameters in randomized polynomial time. Based on this reduction scheme, we establish a formal connection between the PC problem and the PDS problem in prop:reduction, and the desired computational lower bounds follow as thm:main.
We aim to reduce the $\mathrm{PC}(n, k, \gamma)$ problem to the $\mathrm{PDS}(N, K, p, q)$ problem. For simplicity, we focus on the case of $\gamma = 1/2$; the general case follows similarly with a change in some numerical constants that come up in the proof. We are given an adjacency matrix $A \in \{0, 1\}^{n \times n}$, or equivalently, a graph $G$, and with the help of additional randomness, will map it to an adjacency matrix $\widetilde{A} \in \{0, 1\}^{N \times N}$, or equivalently, a graph $\widetilde{G}$, such that the hypothesis $H_0^{\mathrm{C}}$ (resp. $H_1^{\mathrm{C}}$) in def:PlantedCliqueDetection is mapped to $H_0$ exactly (resp. $H_1$ approximately) in def:HypTesting. In other words, if $A$ is drawn from $\mathcal{G}(n, 1/2)$, then $\widetilde{A}$ is distributed according to $\mathcal{G}(N, q)$; if $A$ is drawn from $\mathcal{G}(n, k, 1/2)$, then the distribution of $\widetilde{A}$ is close in total variation to $\mathcal{G}(N, K, p, q)$.
Our reduction scheme works as follows. Each vertex in $[N]$ is randomly assigned a parent vertex in $[n]$, with the choice of parent being made independently for different vertices in $[N]$ and uniformly over the set of vertices in $[n]$. Let $V_i$ denote the set of vertices in $[N]$ with parent $i$ and let $\ell_i = |V_i|$. Then the sets of children $\{V_1, \ldots, V_n\}$ form a random partition of $[N]$. For any $1 \le i \le j \le n$, the number of edges, $E_{ij}$, from vertices in $V_i$ to vertices in $V_j$ in $\widetilde{G}$ will be selected randomly with a conditional probability distribution specified below. Given $E_{ij}$, the particular set of edges with cardinality $E_{ij}$ is chosen uniformly at random. It remains to specify, for $1 \le i \le j \le n$, the conditional distribution of $E_{ij}$ given $\ell_i$, $\ell_j$ and $A_{ij}$. Ideally, conditioned on $\ell_i$ and $\ell_j$, we want to construct a Markov kernel from $A_{ij}$ to $E_{ij}$ which maps $1$ to the desired edge distribution $\mathrm{Binom}(\ell_i \ell_j, p)$ and $0$ to $\mathrm{Binom}(\ell_i \ell_j, q)$, depending on whether both $i$ and $j$ are in the clique or not, respectively. Such a kernel, unfortunately, provably does not exist. Nonetheless, this objective can be accomplished approximately in terms of the total variation. Fix $1 \le i < j \le n$ and put $m = \ell_i \ell_j$. Define
where . Let . As we show later, $P_0$ and $P_1$ are well-defined probability distributions as long as and , where . Then, for $1 \le i < j \le n$, let the conditional distribution of $E_{ij}$ given $\ell_i$, $\ell_j$, and $A_{ij}$ be given by
(5) 
The next proposition (proved in sec:pfreduction) shows that the randomized reduction defined above maps $\mathcal{G}(n, 1/2)$ into $\mathcal{G}(N, q)$ under the null hypothesis and $\mathcal{G}(n, k, 1/2)$ approximately into $\mathcal{G}(N, K, p, q)$ under the alternative hypothesis, respectively. The intuition behind the reduction scheme is as follows: By construction, the null distribution of the PC problem is exactly matched to that of the PDS problem, i.e., $\widetilde{G} \sim \mathcal{G}(N, q)$. The core of the proof lies in establishing that the alternative distributions are approximately matched. The key observation is that $P_1$ is close to $\mathrm{Binom}(m, p)$, and thus for nodes with distinct parents in the planted clique, the number of edges $E_{ij}$ is approximately distributed as the desired $\mathrm{Binom}(\ell_i \ell_j, p)$; for nodes with the same parent $i$ in the planted clique, even though the number of edges within $V_i$ is distributed as $\mathrm{Binom}(\binom{\ell_i}{2}, q)$, which is not sufficiently close to the desired $\mathrm{Binom}(\binom{\ell_i}{2}, p)$, after averaging over the random partition $\{V_i\}$, the total variation distance becomes negligible.
Proposition 3.
Let , and . Let , , and . Assume that and . If , then , i.e., . If , then
(6) 
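For intuition only, the skeleton of the reduction described above can be sketched in code. To stay self-contained, this sketch replaces the carefully constructed kernels $P_0$, $P_1$ of (5) with plain binomials, so it conveys the combinatorial structure (parent assignment, blockwise edge counts, uniform edge placement) but not the total-variation guarantees of prop:reduction:

```python
import random
from itertools import combinations

def expand_graph(A, N, p, q, rng):
    """Map an n-vertex graph (symmetric 0/1 adjacency matrix A) to an
    N-vertex graph.

    Each of the N output vertices picks a uniform parent among the n
    input vertices.  For parents i < j, the number of edges between the
    blocks V_i and V_j is drawn from Binom(|V_i||V_j|, p) if (i, j) is an
    input edge and Binom(|V_i||V_j|, q) otherwise, and that many block
    pairs are then connected uniformly at random; pairs inside a block
    are connected independently with probability q.
    """
    n = len(A)
    parent = [rng.randrange(n) for _ in range(N)]
    blocks = [[v for v in range(N) if parent[v] == i] for i in range(n)]
    edges = set()
    for i, j in combinations(range(n), 2):
        pairs = [(min(u, v), max(u, v)) for u in blocks[i] for v in blocks[j]]
        prob = p if A[i][j] else q
        m = sum(rng.random() < prob for _ in pairs)
        edges.update(rng.sample(pairs, m))
    for i in range(n):                      # within-block pairs: probability q
        for u, v in combinations(blocks[i], 2):
            if rng.random() < q:
                edges.add((u, v))
    return edges
```

On an input with no planted clique, the output is exactly an Erdős–Rényi graph, mirroring the null-matching property of the actual reduction.
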
An immediate consequence of prop:reduction is the following result (proved in sec:pftest), showing that any PDS solver induces a solver for a corresponding instance of the PC problem.
Proposition 4.
Let the assumptions of prop:reduction hold. Suppose $\phi$ is a test for $\mathrm{PDS}(N, K, p, q)$ with Type-I+II error probability $\eta$. Then the induced test is a test for the corresponding PC problem whose Type-I+II error probability is upper bounded by $\eta + \xi$, with $\xi$ given by the right-hand side of eq:defxi.
The following theorem establishes the computational limit of the PDS problem, as shown in fig:phase.
Theorem 1.
Assume hyp:HypothesisPlantedClique holds for a fixed $\gamma \in (0, 1/2]$. Let $p = cq$ for a constant $c > 1$. Let $\alpha$ and $\beta$ be such that
(7) 
Then there exists a sequence satisfying
such that for any sequence of randomized polynomial-time tests for the PDS problem, the Type-I+II error probability is lower bounded by
where $G \sim \mathcal{G}(N, q)$ under $H_0$ and $G \sim \mathcal{G}(N, K, p, q)$ under $H_1$. Consequently, if hyp:HypothesisPlantedClique holds for all $\gamma \in (0, 1/2]$, then the above holds for all $\alpha$ and $\beta$ such that
(8) $\quad \alpha < \beta < \frac{1}{2} + \frac{\alpha}{4}.$
Remark 1.
Consider the asymptotic regime given by eq:scaling. The function $\beta^{\sharp}(\alpha) = \frac{1}{2} + \frac{\alpha}{4}$ in eq:computationallimit gives the computational barrier for the PDS problem (see fig:phase). Compared to the statistical limit $\beta^*(\alpha)$ given in eq:detectionlimit, we note that $\beta^{\sharp}(\alpha) > \beta^*(\alpha)$ if and only if $\alpha < 2/3$, in which case computational efficiency incurs a significant penalty on the detection performance. Interestingly, this phenomenon is in line with the observation reported in [30] for the noisy submatrix detection problem, where the statistical limit can be attained in polynomial time if and only if the submatrix size exceeds the $2/3$-th power of the matrix size.
4 Extensions and Open Problems
In this section, we discuss the extension of our results to: (1) the planted dense subgraph recovery and DKS problems; (2) the PDS problem where the planted dense subgraph has a deterministic size; (3) the bipartite PDS problem.
4.1 Recovering Planted Dense Subgraphs and the DKS Problem
Closely related to the PDS detection problem is the recovery problem: given a graph generated from $\mathcal{G}(n, K, p, q)$, the task is to recover the planted dense subgraph. As a consequence of our computational lower bound for detection, we discuss implications for the tractability of the recovery problem as well as the closely related DKS problem, as illustrated in fig:planteddensesubgraph.
Consider the asymptotic regime of eq:scaling, where the conditions under which recovery is possible have been characterized in [15, 5]. Note that in this case the recovery problem is harder than the DKS problem: if the planted dense subgraph is recovered with high probability, we can obtain a $(1+\epsilon)$-approximation of the densest $K$-subgraph for any $\epsilon > 0$ in polynomial time.^2 (^2 If the planted dense subgraph is smaller than $K$, output any $K$-subgraph containing it; otherwise output any of its subgraphs of size $K$.) Results in [15, 5] imply that the planted dense subgraph can be recovered in polynomial time in the simple (green) regime of fig:planteddensesubgraph. Consequently, a $(1+\epsilon)$-approximation of the densest $K$-subgraph can be found efficiently in this regime.
Conversely, given a polynomial-time $\rho$-factor approximation algorithm for the DKS problem with output $\widehat{S}$, we can distinguish $H_0$ versus $H_1$ in polynomial time in part of the hard regime, as follows: declare $H_1$ if the edge density of $\widehat{S}$ exceeds a threshold chosen strictly between the typical densities under the two hypotheses, and $H_0$ otherwise. One can show that the density of $\widehat{S}$ falls below the threshold under $H_0$ and above it under $H_1$. Hence, our computational lower bounds for the PDS problem imply that the densest $K$-subgraph, as well as the planted dense subgraph, is hard to approximate to any constant factor if $\alpha < \beta < \frac{1}{2} + \frac{\alpha}{4}$ (the red regime in fig:phase). Whether DKS is hard to approximate within any constant factor in the blue regime is left as an interesting open problem.
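The approximation-to-detection argument above amounts to a one-line density test. In the sketch below, `edge_count` is the number of edges in the $K$-subgraph returned by the assumed approximation algorithm, and `threshold` stands for the cut-off between the null and planted densities (both names are my own; the paper's specific threshold choice is not reproduced here):

```python
def dks_to_pds_test(edge_count, K, threshold):
    """Declare H1 (return 1) iff the edge density of the returned
    K-subgraph, i.e., edge_count / (K choose 2), exceeds the threshold."""
    density = 2 * edge_count / (K * (K - 1))
    return 1 if density > threshold else 0
```
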
4.2 PDS Problem with a Deterministic Size
In the PDS problem with a deterministic size $K$, the null distribution corresponds to the Erdős–Rényi graph $\mathcal{G}(n, q)$; under the alternative, we choose $K$ vertices uniformly at random and plant a dense subgraph with edge probability $p$ on them. Although the subgraph size under our PDS model is binomially distributed and, in the asymptotic regime eq:scaling, sharply concentrated near its mean $K$, it is not entirely clear whether these two models are equivalent. Although our reduction scheme in sec:computationallimits extends to the fixed-size model, with $\{V_i\}$ being a random partition of $[N]$ into blocks of equal size, so far we have not been able to prove that the alternative distributions are approximately matched: the main technical hurdle lies in controlling the total variation between the distribution of the edge counts after averaging over the random partition and the desired distribution.
Nonetheless, our result on the hardness of solving the PDS problem extends to the case of a deterministic dense subgraph size if the tests are required to be monotone. (A test $\phi$ is monotone if $\phi(G) = 1$ implies $\phi(G') = 1$ whenever $G'$ is obtained by adding edges to $G$.) It is intuitive that any reasonable test should be more likely to declare the existence of the planted dense subgraph if the graph contains more edges, as do the linear and scan tests defined in eq:tests. Moreover, by the monotonicity of the likelihood ratio, the statistically optimal test is also monotone. If we restrict our scope to monotone tests, then our computational lower bound implies that, for the PDS problem with a deterministic size, there is no efficiently computable monotone test in the hard regime $\alpha < \beta < \frac{1}{2} + \frac{\alpha}{4}$ of fig:phase. In fact, given a monotone polynomial-time solver $\phi$ for the PDS problem with a deterministic size, the PDS problem with a random size can be solved by $\phi$ in polynomial time, because with high probability the planted dense subgraph is of size at least a constant fraction of $K$. It is an interesting open problem to prove the computational lower bounds without restricting to monotone tests, or to prove that the optimal polynomial-time tests are monotone. We conjecture that the computational limit of the PDS problem with a fixed size is identical to that with a random size, which can indeed be established in the bipartite case, as discussed in the next subsection.
Finally, we can show that the PDS recovery problem with a deterministic planted dense subgraph size is computationally intractable if $\alpha < \beta < \frac{1}{2} + \frac{\alpha}{4}$ (the red regime in fig:phase). This follows from the fact that, given a polynomial-time algorithm for the PDS recovery problem with a deterministic size, we can construct a polynomial-time solver for the corresponding detection problem in this regime (see Appendix B for a formal statement and the proof).
4.3 Bipartite PDS Problem
Let $\mathcal{G}(n_1, n_2, q)$ denote the bipartite Erdős–Rényi random graph model with $n_1$ top vertices and $n_2$ bottom vertices, where each pair of top and bottom vertices is connected independently with probability $q$. Let $\mathcal{G}(n_1, n_2, K_1, K_2, p, q)$ denote the bipartite variant of the planted dense subgraph model in def:HypTesting, with a planted dense subgraph of $K_1$ top vertices and $K_2$ bottom vertices on average. The bipartite PDS problem with parameters $(n_1, n_2, K_1, K_2, p, q)$, denoted by BPDS, refers to the problem of testing $H_0: G \sim \mathcal{G}(n_1, n_2, q)$ versus $H_1: G \sim \mathcal{G}(n_1, n_2, K_1, K_2, p, q)$.
Consider the asymptotic regime of eq:scaling. Following the arguments in sec:statisticallimits, one can show that the statistical limit is given by $\beta^*$ defined in eq:detectionlimit. To derive computational lower bounds, we use a reduction from the bipartite PC problem, denoted by BPC, which tests whether the observed bipartite graph is a bipartite Erdős–Rényi random graph or contains a planted biclique, where the planted biclique model is the bipartite variant of the planted clique model. The BPC Hypothesis refers to the assumption that for some constant $\gamma$, no sequence of randomized polynomial-time tests for BPC succeeds when the planted biclique is sufficiently small. The reduction scheme from BPC to BPDS is analogous to the scheme used in the non-bipartite case, and the proof of the computational lower bounds in the bipartite case is much simpler. In particular, under the null hypothesis the null distributions are exactly matched, while under the alternative hypothesis, lmm:TotalVariationBound directly implies that the total variation distance between the relevant distributions is vanishing. Then, following the arguments in prop:test and thm:main, we conclude that if the BPC Hypothesis holds for any positive $\gamma$, then no efficiently computable test can solve the bipartite PDS problem in the regime given by eq:computationallimit. The same conclusion carries over to the bipartite PDS problem with a deterministic size, and the statistical and computational limits shown in fig:phase apply verbatim.
References
 [1] E. Abbe, A. S. Bandeira, and G. Hall. Exact recovery in the stochastic block model. arXiv preprint arXiv:1405.3267, 2014.

 [2] N. Alon, A. Andoni, T. Kaufman, K. Matulef, R. Rubinfeld, and N. Xie. Testing k-wise and almost k-wise independence. In Proceedings of the Thirty-ninth Annual ACM Symposium on Theory of Computing, pages 496–505. ACM, 2007.
 [3] N. Alon, S. Arora, R. Manokaran, D. Moshkovitz, and O. Weinstein. Inapproximability of densest k-subgraph from average case hardness. Manuscript, available at https://www.nada.kth.se/~rajsekar/papers/dks.pdf, 2011.
 [4] N. Alon, M. Krivelevich, and B. Sudakov. Finding a large hidden clique in a random graph. Random Structures and Algorithms, 13(3–4):457–466, 1998.
 [5] B. P. Ames. Robust convex relaxation for the planted clique and densest subgraph problems. arXiv:1305.4891, 2013.
 [6] B. P. Ames and S. A. Vavasis. Nuclear norm minimization for the planted clique and biclique problems. Mathematical programming, 129(1):69–89, 2011.
 [7] B. Applebaum, B. Barak, and A. Wigderson. Public-key cryptography from different assumptions. In Proceedings of the Forty-second ACM Symposium on Theory of Computing, STOC '10, pages 171–180, 2010. http://www.cs.princeton.edu/~boaz/Papers/ncpkcFull1.pdf.
 [8] E. Arias-Castro and N. Verzelen. Community detection in dense random networks. The Annals of Statistics, 42(3):940–969, 2014.
 [9] S. Arora, B. Barak, M. Brunnermeier, and R. Ge. Computational complexity and information asymmetry in financial products. In Innovations in Computer Science (ICS 2010), pages 49–65, 2010. http://www.cs.princeton.edu/~rongge/derivativelatest.pdf.
 [10] S. Balakrishnan, M. Kolar, A. Rinaldo, A. Singh, and L. Wasserman. Statistical and computational tradeoffs in biclustering. In NIPS 2011 Workshop on Computational Tradeoffs in Statistical Learning.
 [11] Q. Berthet and P. Rigollet. Complexity theoretic lower bounds for sparse principal component detection. J. Mach. Learn. Res., 30:1046–1066 (electronic), 2013.
 [12] A. Bhaskara, M. Charikar, E. Chlamtac, U. Feige, and A. Vijayaraghavan. Detecting high log-densities: an O(n^{1/4}) approximation for densest k-subgraph. In Proceedings of the Forty-second ACM Symposium on Theory of Computing, STOC '10, pages 201–210, 2010.
 [13] C. Butucea and Y. I. Ingster. Detection of a sparse submatrix of a highdimensional noisy matrix. Bernoulli, 19(5B):2652–2688, 11 2013.
 [14] V. Chandrasekaran and M. I. Jordan. Computational and statistical tradeoffs via convex relaxation. PNAS, 110(13):E1181–E1190, 2013.
 [15] Y. Chen and J. Xu. Statisticalcomputational tradeoffs in planted problems and submatrix localization with a growing number of clusters and submatrices. arXiv:1402.1267, 2014.
 [16] A. Decelle, F. Krzakala, C. Moore, and L. Zdeborová. Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications. Physical Review E, 84:066106, 2011.
 [17] Y. Dekel, O. Gurel-Gurevich, and Y. Peres. Finding hidden cliques in linear time with high probability. arXiv:1010.2997, 2010.
 [18] Y. Deshpande and A. Montanari. Finding hidden cliques of size √(N/e) in nearly linear time. arXiv:1304.7047, 2013.
 [19] D. Dubhashi and D. Ranjan. Balls and bins: A study in negative dependence. Random Structures and Algorithms, 13(2):99–124, 1998.
 [20] U. Feige and R. Krauthgamer. Finding and certifying a large hidden clique in a semi-random graph. Random Structures & Algorithms, 16(2):195–208, 2000.
 [21] U. Feige and D. Ron. Finding hidden cliques in linear time. In 21st International Meeting on Probabilistic, Combinatorial, and Asymptotic Methods in the Analysis of Algorithms (AofA 10), Discrete Math. Theor. Comput. Sci. Proc., AM, pages 189–203, 2010.
 [22] V. Feldman, E. Grigorescu, L. Reyzin, S. Vempala, and Y. Xiao. Statistical algorithms and a lower bound for detecting planted cliques. In Proceedings of the 45th annual ACM symposium on Symposium on theory of computing, pages 655–664, 2013.
 [23] S. Fortunato. Community detection in graphs. Physics Reports, 486(3):75–174, 2010.
 [24] B. Hajek, Y. Wu, and J. Xu. Achieving exact cluster recovery threshold via semidefinite programming. arXiv:1412.6156, 2014.
 [25] E. Hazan and R. Krauthgamer. How hard is it to approximate the best Nash equilibrium? SIAM Journal on Computing, 40(1):79–91, 2011.
 [26] M. Jerrum. Large cliques elude the metropolis process. Random Structures & Algorithms, 3(4):347–359, 1992.
 [27] A. Juels and M. Peinado. Hiding cliques for cryptographic security. Designs, Codes and Cryptography, 2000.
 [28] M. Kolar, S. Balakrishnan, A. Rinaldo, and A. Singh. Minimax localization of structural information in large noisy matrices. In NIPS, 2011.
 [29] L. Kučera. Expected complexity of graph partitioning problems. Discrete Applied Mathematics, 57(2):193–212, 1995.
 [30] Z. Ma and Y. Wu. Computational barriers in minimax submatrix detection. to appear in The Annals of Statistics, arXiv:1309.5914, 2015.
 [31] L. Massoulié. Community detection thresholds and the weak Ramanujan property. arXiv:1109.3318, 2013.
 [32] F. McSherry. Spectral partitioning of random graphs. In FOCS, pages 529–537, 2001.
 [33] M. Mitzenmacher and E. Upfal. Probability and Computing: Randomized Algorithms and Probabilistic Analysis. Cambridge University Press, New York, NY, USA, 2005.
 [34] E. Mossel, J. Neeman, and A. Sly. Stochastic block models and reconstruction. arXiv:1202.1499, 2012.
 [35] E. Mossel, J. Neeman, and A. Sly. A proof of the block model threshold conjecture. arXiv:1311.4115, 2013.
 [36] E. Mossel, J. Neeman, and A. Sly. Consistency thresholds for binary symmetric block models. arXiv:1407.1591, 2014.
 [37] R. Vershynin. A simple decoupling inequality in probability theory. Manuscript, available at http://wwwpersonal.umich.edu/~romanv/papers/decouplingsimple.pdf, 2011.
 [38] N. Verzelen and E. Arias-Castro. Community detection in sparse random networks. arXiv:1308.2955, 2013.
 [39] J. Xu, R. Wu, K. Zhu, B. Hajek, R. Srikant, and L. Ying. Jointly clustering rows and columns of binary matrices: Algorithms and tradeoffs. SIGMETRICS Perform. Eval. Rev., 42(1):29–41, June 2014.
Appendix A Proofs
A.1 Proof of prop:lowerbound
Proof.
Let denote the distribution of conditional on under the alternative hypothesis. Since , by the Chernoff bound, . Therefore,
(9) 
where the first inequality follows from the convexity of the total variation distance. Next we condition on for a fixed . Then
is uniformly distributed over all subsets of size
. Let be an independent copy of . Then . By the definition of the divergence and Fubini’s theorem, where (a) is due to the fact that , and (b) follows from lmm:H in app:H with an appropriate choice of function satisfying . Therefore, we get that
(10) 
Combining eq:dtv1 and eq:dtv2 yields eq:lbtv with . ∎
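The first step of the proof bounds the probability that the random planted set is much larger than its mean K via the Chernoff bound, since the set size is Binomial(n, K/n). As a hedged numerical sketch (the parameter values and the helper `chernoff_upper_tail` are illustrative assumptions, not the paper's notation), one can check that the multiplicative Chernoff bound indeed dominates the empirical upper tail:

```python
import math
import random

def chernoff_upper_tail(n, p, t, trials=5000, seed=0):
    """Empirical P(Binomial(n, p) >= t) versus the multiplicative Chernoff
    bound P(X >= (1 + d) * mean) <= exp(-mean * d^2 / (2 + d))."""
    rng = random.Random(seed)
    hits = sum(
        sum(rng.random() < p for _ in range(n)) >= t
        for _ in range(trials)
    )
    empirical = hits / trials
    mean = n * p
    d = t / mean - 1.0  # deviation factor, so t = (1 + d) * mean
    bound = math.exp(-mean * d * d / (2 + d))
    return empirical, bound

# Set size Binomial(n, K/n) with n = 1000, K = 20; tail event {size >= 2K}.
emp, bnd = chernoff_upper_tail(n=1000, p=0.02, t=40)
assert emp <= bnd  # the Chernoff bound dominates the simulated tail
```

Here d = 1, so the bound evaluates to exp(-K/3), matching the exponential-in-K decay used in the proof.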
A.2 Proof of prop:upperbound
Proof.
Let denote a constant whose value only depends on and may change line by line. Under , . By the Bernstein inequality,
Under , since , by the Chernoff bound, . Conditional on for some , is distributed as an independent sum of and . By the multiplicative Chernoff bound (see, e.g., [33, Theorem 4.5]),
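The concentration bounds above underlie the total degree test from the introduction: under the null the edge count concentrates around its G(n, q) mean, while the planted dense set adds roughly (p - q)K(K-1)/2 extra edges in expectation. A hedged simulation sketch (the parameters, the fixed planted set, and the midpoint threshold are illustrative assumptions, not the paper's exact test):

```python
import random

def sample_edge_count(n, q, planted=None, p=None, seed=0):
    """Count edges of G(n, q); pairs inside the planted set connect with
    probability p instead of q."""
    rng = random.Random(seed)
    planted = planted or set()
    edges = 0
    for i in range(n):
        for j in range(i + 1, n):
            prob = p if (i in planted and j in planted) else q
            edges += rng.random() < prob
    return edges

n, q, p, K = 400, 0.05, 0.5, 60
null_mean = q * n * (n - 1) / 2
excess = (p - q) * K * (K - 1) / 2   # expected extra edges from the planted set
threshold = null_mean + excess / 2   # midpoint threshold (illustrative choice)

e0 = sample_edge_count(n, q, seed=1)                              # null sample
e1 = sample_edge_count(n, q, planted=set(range(K)), p=p, seed=2)  # planted sample
assert e0 < threshold < e1  # the edge-count statistic separates the two samples
```

With these parameters the null standard deviation is about 62 edges while the planted excess is roughly 800, so the separation is many standard deviations wide, consistent with the Bernstein and Chernoff estimates in the proof.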