Computational and Statistical Boundaries for Submatrix Localization in a Large Noisy Matrix

02/06/2015
by   T. Tony Cai, et al.

The interplay between computational efficiency and statistical accuracy in high-dimensional inference has drawn increasing attention in the literature. In this paper, we study computational and statistical boundaries for submatrix localization. Given one observation of a signal submatrix (of magnitude λ and size k_m × k_n), or of multiple non-overlapping signal submatrices, contaminated with a noise matrix (of size m × n), we establish two transition thresholds for the signal-to-noise ratio λ/σ in terms of m, n, k_m, and k_n. The first threshold, SNR_c, corresponds to the computational boundary. Below this threshold, it is shown that no polynomial time algorithm can succeed in identifying the submatrix, under the hidden clique hypothesis. We introduce adaptive linear time spectral algorithms that identify the submatrix with high probability when the signal strength is above the threshold SNR_c. The second threshold, SNR_s, captures the statistical boundary, below which no method can succeed with probability going to one in the minimax sense. The exhaustive search method successfully finds the submatrix above this threshold. The results show an interesting phenomenon: SNR_c is always significantly larger than SNR_s, which implies an essential gap between statistical optimality and computational efficiency for submatrix localization.


1 Introduction

The “signal + noise” model

X = M + Z,    (1)

where M is the signal of interest and Z is noise, is ubiquitous in statistics and is used in a wide range of applications. When M and Z are matrices, many interesting problems arise under a variety of structural assumptions on M and the distribution of Z. Examples include sparse principal component analysis (PCA) (Vu and Lei, 2012; Berthet and Rigollet, 2013b; Birnbaum et al., 2013; Cai et al., 2013, 2015), non-negative matrix factorization (Lee and Seung, 2001), and non-negative PCA (Zass and Shashua, 2006; Montanari and Richard, 2014). Under the conventional statistical framework, one looks for optimal statistical procedures for recovering the signal or detecting its presence.

As the dimensionality of the data becomes large, the computational concerns associated with statistical procedures come to the forefront. In particular, problems with a combinatorial structure or non-convex constraints pose a significant computational challenge because naive methods based on exhaustive search are typically not computationally efficient. The trade-off between computational efficiency and statistical accuracy in high-dimensional inference has drawn increasing attention in the literature. In particular, Chandrasekaran et al. (2012) and Wainwright (2014) considered a general class of linear inverse problems, with different emphases on convex geometry and on the decomposition of statistical and computational errors. Chandrasekaran and Jordan (2013) studied an approach for trading off computational demands with statistical accuracy via relaxation hierarchies. Berthet and Rigollet (2013a); Ma and Wu (2013); Zhang et al. (2014) focused on computational requirements for various statistical problems, such as detection and regression.

In the present paper, we study the interplay between computational efficiency and statistical accuracy in submatrix localization based on a noisy observation of a large matrix. The problem considered in this paper is formalized as follows.

1.1 Problem Formulation

Consider the matrix X ∈ R^{m×n} of the form

X = M + Z,    (2)

with M = λ · 1_{R_m} 1_{R_n}^⊤; that is, M_{ij} = λ on the index set R_m × R_n and zero otherwise, where R_m ⊂ [m] and R_n ⊂ [n] with |R_m| = k_m and |R_n| = k_n. Here, the entries Z_{ij} of the noise matrix Z are i.i.d. zero-mean sub-Gaussian random variables with parameter σ (defined formally in Equation (4)). Given the parameters (m, n, k_m, k_n, λ, σ), the set of all distributions described above—for all possible choices of R_m and R_n—forms the submatrix model M(m, n, k_m, k_n, λ, σ).

This model can be further extended to the case of multiple submatrices as

X = Σ_{t=1}^{r} λ_t · 1_{R_t} 1_{C_t}^⊤ + Z,    (3)

where R_t ⊂ [m] and C_t ⊂ [n] denote the row and column support sets of the t-th submatrix. For simplicity, we first focus on the single submatrix case and then extend the analysis to the model (3) in Section 2.5.

There are two fundamental questions associated with the submatrix model (2). One is the detection problem: given one observation of the matrix X, decide whether it is generated from a distribution in the submatrix model or from the pure noise model. Precisely, the detection problem considers testing of the hypotheses

H₀: X = Z    versus    H₁: X = M + Z with M = λ · 1_{R_m} 1_{R_n}^⊤ as in (2).

The other is the localization problem, where the goal is to exactly recover the signal index sets R_m and R_n (the support of the mean matrix M). It is clear that the localization problem is at least as hard (both computationally and statistically) as the detection problem. As we show in this paper, the localization problem requires a larger signal to noise ratio λ/σ, as well as a more detailed exploitation of the submatrix structure.

If the signal to noise ratio λ/σ is sufficiently large, it is computationally easy to localize the submatrix. On the other hand, if this ratio is small, the localization problem is statistically impossible. To quantify this phenomenon, we identify two distinct thresholds (SNR_s and SNR_c) for λ/σ in terms of the parameters m, n, k_m, k_n. The first threshold, SNR_s, captures the statistical boundary, below which no method (possibly of exponential running time) can succeed with probability going to one in the minimax sense. The exhaustive search method successfully finds the submatrix above this threshold. The second threshold, SNR_c, corresponds to the computational boundary, above which an adaptive (with respect to the parameters) linear time spectral algorithm finds the signal. Below this threshold, no polynomial time algorithm can succeed, under the hidden clique hypothesis described later.

1.2 Prior Work

There is a growing body of work in the statistical literature on submatrix problems. Shabalin et al. (2009) provided a fast iterative maximization algorithm to solve the submatrix localization problem. However, as with many EM type algorithms, the theoretical result is very sensitive to initialization. Arias-Castro et al. (2011) studied the detection problem for a cluster inside a large matrix. Butucea and Ingster (2013); Butucea et al. (2013) formulated the submatrix detection and localization problems under Gaussian noise and determined sharp statistical transition boundaries. For the detection problem, Ma and Wu (2013) provided a computational lower bound result under the assumption that hidden clique detection is computationally difficult.

Balakrishnan et al. (2011); Kolar et al. (2011) focused on statistical and computational trade-offs for the submatrix localization problem. They provided a computationally feasible entry-wise thresholding algorithm, a row/column averaging algorithm, and a convex relaxation for sparse SVD to investigate the minimum signal to noise ratio that is required in order to localize the submatrix. In the sparse regime k_m ≲ √m and k_n ≲ √n, entry-wise thresholding turns out to be a near optimal polynomial-time algorithm (though we will exhibit a de-noised spectral algorithm that performs slightly better in Section 2.4). However, in the dense regime k_m ≳ √m and k_n ≳ √n, the algorithms provided in Kolar et al. (2011) are not optimal, in the sense that there are other polynomial-time algorithms that can succeed in finding the submatrix with smaller SNR. Concurrently with our work, Chen and Xu (2014) provided a convex relaxation algorithm that improves the SNR boundary of Kolar et al. (2011) in the dense regime. On the downside, the implementation of the method requires a full SVD on each iteration, and therefore does not scale well with the dimensionality of the problem. Furthermore, there is no computational lower bound in the literature to guarantee the optimality of the SNR boundary achieved in Chen and Xu (2014).

A problem similar to submatrix localization is that of clique finding. Deshpande and Montanari (2013) presented an iterative approximate message passing algorithm to solve the latter problem with sharp boundaries on SNR. However, in contrast to submatrix localization, where the signal submatrix can be located anywhere within the matrix, the clique finding problem requires the signal to be centered on the diagonal.

We would like to emphasize the difference between the detection and localization problems. When the signal M is a vector, Donoho and Jin (2004) proposed the “higher criticism” approach to solve the detection problem under the Gaussian sequence model. Combining the results in (Donoho and Jin, 2004; Ma and Wu, 2013), in the computationally efficient region there is no loss in treating M in model (2) as a vector and applying the higher criticism method to the vectorized matrix for the problem of submatrix detection. In fact, the procedure achieves sharper constants in the Gaussian setting. However, in contrast to the detection problem, we will show that for localization it is crucial to utilize the matrix structure, even in the computationally efficient region.

1.3 Notation

Let [n] denote the index set {1, 2, …, n}. For a matrix X ∈ R^{m×n}, X_{i·} denotes its i-th row and X_{·j} denotes its j-th column. For any I ⊆ [m] and J ⊆ [n], X_{IJ} denotes the submatrix corresponding to the index sets I and J. For a vector v ∈ R^n, ‖v‖_p = (Σ_{i∈[n]} |v_i|^p)^{1/p}, and for a matrix M, ‖M‖_p = sup_{‖v‖_p = 1} ‖Mv‖_p. When p = 2, the latter is the usual spectral norm, abbreviated as ‖M‖. The nuclear norm of a matrix, a convex surrogate for the rank, is denoted by ‖M‖_* and defined as the sum of the singular values of M. The Frobenius norm of a matrix is defined as ‖M‖_F = (Σ_{i,j} M_{ij}²)^{1/2}. The inner product associated with the Frobenius norm is defined as ⟨A, B⟩ = tr(A^⊤B) = Σ_{i,j} A_{ij} B_{ij}.

We write a_n ≍ b_n if there exist two universal constants c, C > 0 such that c ≤ a_n/b_n ≤ C. The notation ≍̃ denotes asymptotic equivalence hiding logarithmic factors, in the following sense: a_n ≍̃ b_n iff there exists a constant C > 0 such that (log n)^{−C} ≤ a_n/b_n ≤ (log n)^C. Additionally, we write a ∨ b = max(a, b) and a ∧ b = min(a, b), and we use a_n ≲ b_n as equivalent to b_n ≳ a_n: a_n ≲ b_n iff a_n ≤ C b_n for a universal constant C > 0, and a_n ≳ b_n iff a_n ≥ c b_n for a universal constant c > 0.

We define a zero-mean sub-Gaussian random variable Z with sub-Gaussian parameter σ in terms of its Laplace transform. If there exists a universal constant C > 0 such that

E exp(tZ) ≤ exp(C σ² t²)  for all t ∈ R,    (4)

then, in particular, we have the tail bound P(|Z| > u) ≤ 2 exp(−u²/(C′σ²)) for all u > 0 and some universal constant C′ > 0.

We call a random vector Y ∈ R^n isotropic with parameter σ if

E ⟨Y, x⟩² = σ² ‖x‖₂²  for every x ∈ R^n.

Clearly, Gaussian and Bernoulli measures, and more generally product measures of zero-mean sub-Gaussian random variables, satisfy this definition of isotropy up to a constant scalar factor.

1.4 Our Contributions

To state our main results, let us first define a hierarchy of algorithm classes in terms of their worst-case running time on instances of the submatrix localization problem:

Lin ⊆ Poly ⊆ Exp ⊆ All.

The set Lin contains algorithms that produce an answer (in our case, the localization subsets R̂_m ⊆ [m] and R̂_n ⊆ [n]) in time linear in mn (the minimal computation required to read the matrix). The classes Poly and Exp of algorithms, respectively, terminate in polynomial and exponential time, while All carries no restriction on running time.

Combining Theorems 3 and 4 in Section 2 with Theorem 5 in Section 3, the statistical and computational boundaries for submatrix localization can be summarized as follows.

Theorem 1 (Computational and Statistical Boundaries).

Consider the submatrix localization problem under the model (2). The computational boundary for the dense case when k_m ≳ √m and k_n ≳ √n is

SNR_c ≍̃ √((m ∨ n)/(k_m k_n)),    (5)

in the sense that

λ/σ ≳̃ SNR_c  ⟹  localization is solvable in linear time,    (6)
λ/σ ≲̃ SNR_c  ⟹  localization is not solvable in polynomial time,    (7)

where (7) holds under the Hidden Clique hypothesis HC_ℓ (see Section 2.1). For the sparse case when k_m ≲ √m and k_n ≲ √n, the computational boundary is SNR_c ≍̃ 1; more precisely, 1 ≲ SNR_c ≲ √(log(m ∨ n)).

The statistical boundary SNR_s is

SNR_s ≍ √(log m / k_n) ∨ √(log n / k_m),    (8)

in the sense that

λ/σ ≳ SNR_s  ⟹  localization is solvable (via exhaustive search),    (9)
λ/σ ≲ SNR_s  ⟹  localization is statistically impossible,    (10)

under a minimal growth assumption on the parameters (m, n, k_m, k_n).

If we parametrize the submatrix model as m = n, k_m = k_n = k = n^α and λ/σ = n^{−β}, for some 0 < α < 1 and β ≥ 0, we can summarize the results of Theorem 1 in a phase diagram, as illustrated in Figure 1.

Figure 1: Phase diagram for submatrix localization. Red region (C): statistically impossible, where even without a computational budget the problem cannot be solved. Blue region (B): statistically possible but computationally expensive (under the hidden clique hypothesis), where the problem is hard for all polynomial time algorithms but easy with an exponential time algorithm. Green region (A): statistically possible and computationally easy, where a fast polynomial time algorithm solves the problem.

To explain the diagram, consider the following cases. First, the statistical boundary is

β = α/2,

which gives the line separating the red and the blue regions. For the dense regime α ≥ 1/2, the computational boundary given by Theorem 1 is

β = α − 1/2,

which corresponds to the line separating the blue and the green regions. For the sparse regime α < 1/2, the computational boundary is β = 0, which is the horizontal line connecting (0, 0) to (1/2, 0).
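The geometry of the diagram can be summarized in a few lines of code. The following sketch (ours, not the authors') classifies a point (α, β) of the phase plane into the three regions, ignoring logarithmic factors and the boundary lines themselves:

```python
def phase_region(alpha, beta):
    """Classify a point of the phase diagram in Figure 1, under the
    parametrization m = n, k = n^alpha, lambda/sigma = n^(-beta).
    Logarithmic factors and boundary cases are ignored."""
    assert 0 < alpha < 1 and beta >= 0
    if beta > alpha / 2:                    # above the statistical boundary
        return "C: statistically impossible"
    if beta > max(alpha - 0.5, 0.0):        # between the two boundaries
        return "B: statistically possible, computationally hard"
    return "A: statistically possible, computationally easy"
```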

As a key part of Theorem 1, we provide various linear time spectral algorithms that succeed in localizing the submatrix with high probability in the regime above the computational threshold. Furthermore, the method is adaptive: it does not require prior knowledge of the size of the submatrix. This should be contrasted with the method of Chen and Xu (2014), which requires prior knowledge of (k_m, k_n); furthermore, the running time of their SDP-based method is superlinear in mn. Under the hidden clique hypothesis, we prove that below the computational threshold there is no polynomial time algorithm that can succeed in localizing the submatrix. This is a new result that has not been established in the literature. We remark that the computational lower bound for localization requires a technique different from the lower bound for detection; the latter has been resolved in Ma and Wu (2013).

Beyond the localization of one single submatrix, we generalize both the computational and the statistical results to a growing number of submatrices in Section 2.5. As mentioned earlier, the statistical boundary for single submatrix localization has been investigated by Butucea et al. (2013) in the Gaussian case. Our result focuses on the intrinsic computational difficulty of localization for a growing number of submatrices, at the expense of not providing the exact constants for the thresholds.

The phase transition diagram in Figure 1 for localization should be contrasted with the corresponding result for detection, as shown in (Butucea and Ingster, 2013; Ma and Wu, 2013). For a large enough submatrix size (as quantified by k_m and k_n), the computationally-intractable-but-statistically-possible region collapses for the detection problem, but not for localization. In plain words, detecting the presence of a large submatrix becomes both computationally and statistically easy beyond a certain size, while for localization there is always a gap between the statistically possible and the computationally feasible regions. This phenomenon also appears to be distinct from that in other problems, such as estimation of sparse principal components (Cai et al., 2013), where computational and statistical easiness coincide with each other over a large region of the parameter space.

1.5 Organization of the Paper

The paper is organized as follows. Section 2 establishes the computational boundary, with the computational lower bounds given in Section 2.1 and the upper bound results in Sections 2.2–2.4. An extension to the case of multiple submatrices is presented in Section 2.5. The upper and lower bounds for the statistical boundary for multiple submatrices are discussed in Section 3. A discussion is given in Section 4. Technical proofs are deferred to Section 5. In addition to the spectral methods given in Sections 2.2 and 2.4, Appendix A contains a new analysis of a known method that is based on a convex relaxation (Chen and Xu, 2014). A comparison of computational lower bounds for localization and detection is included in Appendix B.

2 Computational Boundary

We characterize in this section the computational boundaries for the submatrix localization problem. Sections 2.1 and 2.2 consider respectively the computational lower bound and upper bound. The computational lower bound given in Theorem 2 is based on the hidden clique hypothesis.

2.1 Algorithmic Reduction and Computational Lower Bound

Theoretical computer science identifies a range of problems which are believed to be “hard,” in the sense that in the worst case the required computation grows exponentially with the size of the problem. Faced with a new computational problem, one might try to reduce any of the “hard” problems to the new problem, and therefore claim that the new problem is as hard as the rest in this family. Since statistical procedures typically deal with a random (rather than worst-case) input, it is natural to seek token problems that are believed to be computationally difficult on average with respect to some distribution on instances. The hidden clique problem is one such example (for recent results on this problem, see Feldman et al. (2013); Deshpande and Montanari (2013)). While there exists a quasi-polynomial time algorithm, no polynomial-time method (for the appropriate regime, described below) is known. Following several other works on reductions for statistical problems, we work under the hypothesis that no polynomial-time method exists.

Let us make the discussion more precise. Consider the hidden clique model G(N, κ), where N is the total number of nodes and κ is the number of clique nodes. In the hidden clique model, a random graph instance is generated in the following way. Choose κ clique nodes uniformly at random from all the possible choices, and connect all the edges within the clique. For all the other edges, connect with probability 1/2.
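For concreteness, here is a short sketch (ours; the function name and interface are illustrative only) of how a random instance of G(N, κ) can be generated:

```python
import numpy as np

def hidden_clique_instance(N, kappa, seed=None):
    """Sample an adjacency matrix from the hidden clique model G(N, kappa):
    kappa clique nodes chosen uniformly at random, every other pair of
    nodes connected independently with probability 1/2."""
    rng = np.random.default_rng(seed)
    upper = np.triu(rng.random((N, N)) < 0.5, k=1)   # G(N, 1/2) background
    A = (upper | upper.T).astype(int)                # symmetrize
    clique = rng.choice(N, size=kappa, replace=False)
    A[np.ix_(clique, clique)] = 1                    # plant the clique
    np.fill_diagonal(A, 0)                           # no self-loops
    return A, np.sort(clique)
```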

Hidden Clique Hypothesis for Localization (HC_ℓ)

Consider a random instance of the hidden clique model G(N, κ). For any sequence κ(N) such that κ(N) ≤ N^{1/2−δ} for some δ > 0, there is no randomized polynomial time algorithm that can find the planted clique with probability tending to 1 as N → ∞. Mathematically, define the randomized polynomial time algorithm class PolyAlg as the class of algorithms A whose running time is polynomial in N (allowing internal randomization independent of the data). Then

lim sup_{N→∞} sup_{A ∈ PolyAlg} P( A(G) = Clique ) < 1,

where P is the (possibly enlarged, due to the randomness of the algorithm) σ-field conditioned on the clique location, and the outer probability is with respect to the uniform distribution over all possible clique locations.

Hidden Clique Hypothesis for Detection (HC_d)

Consider the hidden clique model G(N, κ). For any sequence κ(N) such that κ(N) ≤ N^{1/2−δ} for some δ > 0, there is no randomized polynomial time algorithm that can distinguish between

H₀: G ~ G(N, 1/2)    versus    H₁: G ~ G(N, κ)

with probability going to 1 as N → ∞. Here G(N, 1/2) is the Erdős–Rényi model, while G(N, κ) is the hidden clique model with the uniform distribution on all the possible locations of the clique. More precisely,

lim inf_{N→∞} inf_{A ∈ PolyAlg} { P_{H₀}(A(G) = H₁) + P_{H₁}(A(G) = H₀) } > 0,

where PolyAlg and the probabilities are the same as defined in HC_ℓ.

The hidden clique hypothesis has been used recently by several authors to claim computational intractability of certain statistical problems. In particular, Berthet and Rigollet (2013a) and Ma and Wu (2013) assumed the detection hypothesis HC_d, and Wang et al. (2014) used HC_ℓ. Localization is harder than detection, in the sense that if an algorithm solves the localization problem with high probability, it also correctly solves the detection problem. Assuming that no polynomial time algorithm can solve the detection problem therefore implies impossibility results for localization as well. In plain language, HC_ℓ is a milder hypothesis than HC_d.

We will provide two computational lower bound results, one for localization and one for detection, in Theorems 2 and 6. The latter is deferred to Appendix B, in order to contrast the difference in the constructions for localization and for detection. The detection computational lower bound was first proved in Ma and Wu (2013). For the localization computational lower bound, to the best of our knowledge, there is no proof in the literature. Theorem 2 ensures that the upper bound in Lemma 1 is sharp.

Theorem 2 (Computational Lower Bound for Localization).

Consider the submatrix model (2) with parameter tuple (m, n, k_m, k_n, λ, σ), where the parameters lie in the dense regime k_m ≳ √m, k_n ≳ √n. Under the computational assumption HC_ℓ, if

λ/σ ≲̃ √((m ∨ n)/(k_m k_n)),

it is not possible to localize the true support of the submatrix with probability going to 1 within polynomial time.

Our algorithmic reduction for localization relies on a bootstrapping idea based on the matrix structure and on a cleaning-up procedure introduced in Lemma 13 in Section 5. These two key ideas offer new insights in addition to the usual computational lower bound arguments. Bootstrapping introduces additional randomness on top of the randomness in the hidden clique. A careful examination of these two σ-fields allows us to write the resulting object as a mixture of submatrix models. For submatrix localization, we need to transform the submatrix support back to the original hidden clique support exactly, with high probability. In plain language, even though we lose track of the exact location of the support when reducing the hidden clique to the submatrix model, we can still recover the exact location of the hidden clique with high probability. For the technical details of the proof, please refer to Section 5.

2.2 Adaptive Spectral Algorithm and Computational Upper Bound

In this section, we introduce linear time algorithms that solve the submatrix localization problem above the computational boundary SNR_c. Our proposed localization Algorithms 1 and 2 are motivated by spectral algorithms for random graphs and clustering (McSherry, 2001; Ng et al., 2002).

Input: X ∈ R^{m×n}, the data matrix.
Output: A subset R̂_m of the row indexes and a subset R̂_n of the column indexes, as the localization sets of the submatrix.
1. Compute the top left and top right singular vectors u₁ and v₁, respectively (these correspond to the SVD X = UΣV^⊤);
2. To compute R̂_m, calculate the inner products ⟨X_{i·}, v₁⟩, 1 ≤ i ≤ m. These values form two clusters. Similarly, for R̂_n, calculate ⟨X_{·j}, u₁⟩, 1 ≤ j ≤ n, and obtain two separated clusters. A simple thresholding procedure returns the subsets R̂_m and R̂_n.
Algorithm 1 Vanilla Spectral Projection Algorithm for Dense Regime
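To make the procedure concrete, here is a minimal numpy sketch (ours) of Algorithm 1. The paper leaves the final clustering step as “a simple thresholding procedure”; the largest-gap split used below is one reasonable instantiation, not necessarily the authors' choice:

```python
import numpy as np

def spectral_localize(X):
    """Vanilla spectral projection (a sketch of Algorithm 1)."""
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    u1, v1 = U[:, 0], Vt[0, :]          # top left/right singular vectors

    def upper_cluster(scores):
        # Split |scores| at the largest consecutive gap; return the
        # indices in the upper cluster (candidate signal rows/columns).
        s = np.abs(scores)
        order = np.argsort(s)
        cut = int(np.argmax(np.diff(s[order])))
        return np.sort(order[cut + 1:])

    rows = upper_cluster(X @ v1)        # inner products <X_{i.}, v1>
    cols = upper_cluster(X.T @ u1)      # inner products <X_{.j}, u1>
    return rows, cols
```

Since only the top singular pair is needed, the full SVD above can be replaced by a few power iterations, which keeps the overall running time linear in mn.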

The proposed algorithm has several advantages over the localization algorithms that have appeared in the literature. First, it is a linear time algorithm (that is, of time complexity O(mn)). The top singular vectors can be evaluated using fast iterative power methods, which is efficient both in terms of space and time. Secondly, this algorithm does not require prior knowledge of k_m and k_n and automatically adapts to the true submatrix size.

Lemma 1 below justifies the effectiveness of the spectral algorithm.

Lemma 1 (Guarantee for Spectral Algorithm).

Consider the submatrix model (2) and Algorithm 1, and assume the dense regime k_m ≳ √m, k_n ≳ √n. There exists a universal constant C > 0 such that when

λ/σ ≥ C √((m ∨ n) log(m ∨ n)/(k_m k_n)),

the spectral method succeeds in the sense that (R̂_m, R̂_n) = (R_m, R_n) with probability tending to one.

2.3 Dense Regime

We are now ready to state the SNR boundary for polynomial-time algorithms (under an appropriate computational assumption), thus excluding the exhaustive search procedure. The results hold under the dense regime when k_m ≳ √m and k_n ≳ √n.

Theorem 3 (Computational Boundary for Dense Regime).

Consider the submatrix model (2) and assume k_m ≳ √m and k_n ≳ √n. There exists a critical rate

SNR_c ≍̃ √((m ∨ n)/(k_m k_n))

for the signal to noise ratio such that for λ/σ ≳̃ SNR_c, both the adaptive linear time Algorithm 1 and the robust polynomial time Algorithm 5 will succeed in submatrix localization, i.e., (R̂_m, R̂_n) = (R_m, R_n), with high probability. For λ/σ ≲̃ SNR_c, there is no polynomial time algorithm that will work, under the hidden clique hypothesis HC_ℓ.

The proof of the above theorem is based on the theoretical justification of the spectral Algorithm 1 and the convex relaxation Algorithm 5, together with the new computational lower bound result for localization in Theorem 2. We remark that the analyses can be extended to the case of multiple, even a growing number of, submatrices. For simplicity, we postpone this extension to Section 2.5 and focus here on the case of a single submatrix.

2.4 Sparse Regime

Under the sparse regime when k_m ≲ √m and k_n ≲ √n, a naive plug-in of Lemma 1 requires λ/σ to be larger than √((m ∨ n)/(k_m k_n)), which is polynomially large in this regime and implies that the vanilla spectral Algorithm 1 is outperformed by simple entrywise thresholding. However, a modified version with entrywise soft-thresholding as a preprocessing de-noising step turns out to provide near optimal performance in the sparse regime. Before we introduce the formal algorithm, let us define the soft-thresholding function at level t to be

η_t(x) = sgn(x) (|x| − t)₊.    (11)

Soft-thresholding as a de-noising step achieving the optimal bias-variance trade-off has been widely studied in the wavelet literature; see, for example, Donoho and Johnstone (1998).
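In code, the de-noising step is a one-liner; a minimal numpy sketch (ours) of the function in (11):

```python
import numpy as np

def soft_threshold(x, t):
    """Entrywise soft-thresholding: eta_t(x) = sgn(x) * (|x| - t)_+."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
```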

Now we are ready to state the following de-noised spectral Algorithm 2 to localize the submatrix under the sparse regime when k_m ≲ √m and k_n ≲ √n.

Input: X ∈ R^{m×n}, the data matrix; a thresholding level t > 0.
Output: A subset R̂_m of the row indexes and a subset R̂_n of the column indexes, as the localization sets of the submatrix.
1. Soft-threshold each entry of the matrix X at level t; denote the resulting matrix by η_t(X);
2. Compute the top left and top right singular vectors u₁ and v₁ of the matrix η_t(X), respectively (these correspond to the SVD η_t(X) = UΣV^⊤);
3. To compute R̂_m, calculate the inner products ⟨[η_t(X)]_{i·}, v₁⟩, 1 ≤ i ≤ m. These values form two clusters. Similarly, for R̂_n, calculate ⟨[η_t(X)]_{·j}, u₁⟩, 1 ≤ j ≤ n, and obtain two separated clusters. A simple thresholding procedure returns the subsets R̂_m and R̂_n.
Algorithm 2 De-noised Spectral Algorithm for Sparse Regime
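Given the two sketches above, Algorithm 2 is simply their composition; the level t is an input, for instance of the order of σ√(log(m ∨ n)) (cf. Lemma 2 below):

```python
def denoised_spectral_localize(X, t):
    """De-noised spectral localization (a sketch of Algorithm 2):
    soft-threshold entrywise at level t, then run Algorithm 1."""
    return spectral_localize(soft_threshold(X, t))
```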

Lemma 2 below provides the theoretical guarantee for the above algorithm when k_m ≲ √m and k_n ≲ √n.

Lemma 2 (Guarantee for De-noised Spectral Algorithm).

Consider the submatrix model (2) and the soft-thresholded spectral Algorithm 2 with thresholding level t, and assume k_m ≲ √m and k_n ≲ √n. There exists a universal constant C > 0 such that when λ/σ exceeds a threshold determined by (m, n, k_m, k_n) and the level t, the spectral method succeeds in the sense that (R̂_m, R̂_n) = (R_m, R_n) with probability tending to one. Further, if we choose t ≍ σ√(log(m ∨ n)) as the optimal thresholding level, the de-noised spectral algorithm works when

λ/σ ≥ C √(log(m ∨ n)).

Combining the hidden clique hypothesis with Lemma 2, we obtain the following theorem under the sparse regime k_m ≲ √m, k_n ≲ √n.

Theorem 4 (Computational Boundary for Sparse Regime).

Consider the submatrix model (2) and assume k_m ≲ √m and k_n ≲ √n. There exists a critical rate SNR_c for the signal to noise ratio, between

1 ≲ SNR_c ≲ √(log(m ∨ n)),

such that for λ/σ ≳ SNR_c, the linear time Algorithm 2 will succeed in submatrix localization, i.e., (R̂_m, R̂_n) = (R_m, R_n), with high probability. For λ/σ ≲ SNR_c, there is no polynomial time algorithm that will work, under the hidden clique hypothesis HC_ℓ.

Remark 4.1.

The upper bound achieved by the de-noised spectral Algorithm 2 is optimal in two boundary cases: k_m ∧ k_n ≍ 1 and k_m ∨ k_n ≍ √(m ∧ n). When k_m ∧ k_n ≍ 1, both the information theoretic and the computational boundaries meet at λ/σ ≍ √(log(m ∨ n)). When k_m ∨ k_n ≍ √(m ∧ n), the computational lower bound and upper bound match in Theorem 4 up to logarithmic factors, thus suggesting the near optimality of Algorithm 2 within the polynomial time algorithm class. The potential logarithmic gap is due to the crudeness of the hidden clique hypothesis: hidden clique is believed to be hard not only for clique sizes κ ≍ N^{1/2−δ} with δ > 0, but also for clique sizes κ within logarithmic factors of √N, and a sharper form of the hypothesis would remove the logarithmic slack in Theorem 4.

2.5 Extension to Growing Number of Submatrices

The computational boundaries established in the previous sections for a single submatrix can be extended to the non-overlapping multiple submatrices model (3). The non-overlapping assumption means that for any s ≠ t, R_s ∩ R_t = ∅ and C_s ∩ C_t = ∅. Algorithm 3 below extends the spectral projection Algorithm 1 to the localization problem for multiple submatrices.

Input: X ∈ R^{m×n}, the data matrix; a pre-specified number of submatrices r.
Output: Subsets R̂_t of the row indexes and subsets Ĉ_t of the column indexes, 1 ≤ t ≤ r, as the localization of the submatrices.
1. Calculate the top r left and right singular vectors in the SVD X = UΣV^⊤. Denote these vectors by U_r = [u₁, …, u_r] and V_r = [v₁, …, v_r], respectively;
2. For the rows, calculate the projections X_{i·}V_r, 1 ≤ i ≤ m, and run a k-means clustering algorithm (with k = r + 1) on these m vectors in R^r. For the columns, calculate U_r^⊤X_{·j}, 1 ≤ j ≤ n, and run a k-means clustering algorithm (with k = r + 1) on these n vectors in R^r (while the effective dimension is r).
Algorithm 3 Spectral Algorithm for Multiple Submatrices
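A compact sketch (ours) of Algorithm 3, using scikit-learn's k-means; the choice of r + 1 clusters (one per submatrix plus one for the background rows/columns) reflects our reading of the algorithm:

```python
import numpy as np
from sklearn.cluster import KMeans

def multi_spectral_localize(X, r):
    """Spectral localization of r submatrices (a sketch of Algorithm 3):
    project rows/columns onto the top-r singular subspaces and cluster."""
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    Ur, Vr = U[:, :r], Vt[:r, :].T                 # top-r singular vectors

    row_proj = X @ Vr                              # each row -> point in R^r
    col_proj = X.T @ Ur                            # each column -> point in R^r

    row_labels = KMeans(n_clusters=r + 1, n_init=10).fit_predict(row_proj)
    col_labels = KMeans(n_clusters=r + 1, n_init=10).fit_predict(col_proj)
    return row_labels, col_labels                  # cluster labels per row/column
```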

We emphasize that the following Lemma 3 holds even when the number of submatrices r grows with m and n.

Lemma 3 (Spectral Algorithm for Non-overlapping Submatrices Case).

Consider the non-overlapping multiple submatrices model (3) and Algorithm 3. Assume the submatrices are of common size k_m × k_n, with signal strengths λ_t ≥ λ for all 1 ≤ t ≤ r. There exists a universal constant C > 0 such that when

λ/σ ≥ C (√r ∨ √m ∨ √n)/√(k_m k_n),    (12)

the spectral method succeeds in the sense that (R̂_t, Ĉ_t) = (R_t, C_t) for all 1 ≤ t ≤ r, with probability tending to one.

Remark 4.2.

Under the non-overlapping assumption, r ≤ m ∧ n holds, so the first term in Equation (12) is dominated by the latter two terms. Thus a growing number of submatrices r does not affect the bound in Equation (12), as long as the non-overlapping assumption holds.

3 Statistical Boundary

In this section we study the statistical boundary. As mentioned in the introduction, in the Gaussian noise setting the statistical boundary for single submatrix localization has been established in Butucea et al. (2013). Here we generalize to the localization of a growing number of submatrices, as well as to sub-Gaussian noise, at the expense of having non-exact constants for the threshold.

3.1 Information Theoretic Bound

We begin with the information theoretic lower bound for the localization accuracy.

Lemma 4 (Information Theoretic Lower Bound).

Consider the submatrix model (2) with Gaussian noise Z_{ij} ~ N(0, σ²). For any fixed ε ∈ (0, 1), there exists a universal constant c > 0 such that if

λ/σ ≤ c ( √(log m / k_n) ∨ √(log n / k_m) ),    (13)

any algorithm will fail to localize the submatrix with probability at least 1 − ε in the following minimax sense:

inf_{(R̂_m, R̂_n)} sup_{M ∈ M(m, n, k_m, k_n, λ, σ)} P( (R̂_m, R̂_n) ≠ (R_m, R_n) ) ≥ 1 − ε.

3.2 Combinatorial Search for Growing Number of Submatrices

Combinatorial search over all k_m × k_n submatrices finds the location with the strongest aggregate signal and is statistically optimal (Butucea et al., 2013; Butucea and Ingster, 2013). Unfortunately, it requires computational complexity of order (m choose k_m)·(n choose k_n), which is exponential in k_m + k_n. The search Algorithm 4 was introduced and analyzed under the Gaussian setting for a single submatrix in Butucea and Ingster (2013); it can be applied iteratively to solve the localization of multiple submatrices.

Input: X ∈ R^{m×n}, the data matrix.
Output: A subset Î of the row indexes and a subset Ĵ of the column indexes, as the localization of the submatrix.
For all index subsets I × J with |I| = k_m and |J| = k_n, calculate the sum of the entries of the submatrix X_{IJ}. Report the index subsets (Î, Ĵ) with the largest sum.
Algorithm 4 Combinatorial Search Algorithm
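A direct implementation (ours) of Algorithm 4 follows. One small but exact simplification: once the row set is fixed, the best column set always consists of the k_n largest column sums over the chosen rows, so only the rows need to be enumerated:

```python
import itertools
import numpy as np

def combinatorial_search(X, k_m, k_n):
    """Exhaustive search (a sketch of Algorithm 4). Exponential time in
    k_m; feasible only for very small instances."""
    m, _ = X.shape
    best_sets, best_sum = None, -np.inf
    for rows in itertools.combinations(range(m), k_m):
        col_sums = X[list(rows), :].sum(axis=0)     # column sums over chosen rows
        cols = np.argsort(col_sums)[-k_n:]          # best k_n columns for these rows
        total = col_sums[cols].sum()
        if total > best_sum:
            best_sets, best_sum = (list(rows), sorted(cols.tolist())), total
    return best_sets
```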

For the case of multiple submatrices, the submatrices can be extracted in a greedy fashion, by repeatedly applying Algorithm 4 and removing the submatrix with the largest sum found at each step.

Lemma 5 below provides a theoretical guarantee for Algorithm 4 to achieve the information theoretic lower bound.

Lemma 5 (Guarantee for Search Algorithm).

Consider the non-overlapping multiple submatrices model (3) and the iterative application of Algorithm 4 in a greedy fashion r times. Assume the submatrices are of common size k_m × k_n, with signal strengths λ_t ≥ λ for all 1 ≤ t ≤ r. There exists a universal constant C > 0 such that if

λ/σ ≥ C ( √(log m / k_n) ∨ √(log n / k_m) ),

then Algorithm 4 will succeed in returning the correct locations of the submatrices with probability tending to one.

To complete Theorem 1, we include the following Theorem 5 capturing the statistical boundary. It is proved by combining the information-theoretic lower bound in Lemma 4 with the analysis of Algorithm 4 in Lemma 5.

Theorem 5 (Statistical Boundary).

Consider the submatrix model (2). There exists a critical rate

SNR_s ≍ √(log m / k_n) ∨ √(log n / k_m)

for the signal to noise ratio, such that for any problem with λ/σ ≳ SNR_s, the statistical search Algorithm 4 will succeed in submatrix localization, i.e., (R̂_m, R̂_n) = (R_m, R_n), with high probability. On the other hand, if λ/σ ≲ SNR_s, no algorithm will work (in the minimax sense) with probability tending to 1.

4 Discussion

In this paper we established the computational and statistical boundaries for submatrix localization in the setting of a growing number of submatrices with sub-Gaussian noise. The primary goals are to demonstrate the intrinsic gap between what is statistically possible and what is computationally feasible, and to contrast the interplay between computational efficiency and statistical accuracy for localization with that for detection.

Submatrix Localization vs. Detection

As pointed out in Section 1.4, for any (m, n, k_m, k_n), there is an intrinsic SNR gap between the computational and the statistical boundaries for submatrix localization. This is unlike the submatrix detection problem, where for the regime of sufficiently large submatrices there is no gap between what is computationally possible and what is statistically possible. The inevitable gap in submatrix localization is due to the combinatorial structure of the problem. This phenomenon is also seen in some network related problems, for instance, stochastic block models with a growing number of communities. Compared to the submatrix detection problem, the algorithms that solve the localization problem are more complicated and the techniques required for the analysis are much more involved.

Detection for Growing Number of Submatrices

The current paper solves localization of a growing number of submatrices. In comparison, for detection, the only known results are for the case of a single submatrix as considered in Butucea and Ingster (2013) for the statistical boundary and in Ma and Wu (2013) for the computational boundary. The detection problem in the setting of a growing number of submatrices is of significant interest. In particular, it is interesting to understand the computational and statistical trade-offs in such a setting. This will need further investigation.

Estimation of the Noise Level

Although Algorithms 1 and 3 do not require the noise level σ as an input, Algorithm 2 does require knowledge of σ. The noise level can be estimated robustly. In the Gaussian case, a simple robust estimator of σ is the following median absolute deviation (MAD) estimator, which exploits the fact that the signal M is sparse:

σ̂ = median_{ij} ( |X_{ij} − median_{ij}(X_{ij})| ) / Φ^{−1}(3/4).
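A sketch (ours) of the MAD estimator; the constant Φ^{−1}(3/4) ≈ 0.6745 calibrates the estimator under Gaussian noise:

```python
import numpy as np
from scipy.stats import norm

def estimate_sigma_mad(X):
    """Median-absolute-deviation estimate of the noise level sigma.
    Because the mean matrix M is sparse, the MAD of the entries of X
    is driven by the noise alone."""
    mad = np.median(np.abs(X - np.median(X)))
    return mad / norm.ppf(0.75)      # Phi^{-1}(3/4) ~= 0.6745
```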

5 Proofs

We prove in this section the main results given in the paper. We first collect and prove a few important technical lemmas that will be used in the proofs of the main results.

5.1 Prerequisite Lemmas

We start with two lemmas, Lemmas 6 and 7, which follow from matrix perturbation theory.

Lemma 6 (Stewart and Sun (1990), Theorem 4.1).

Suppose that X and X̂ = X + Z are matrices of the same size, and that we have the following singular value decompositions

X = U₁Σ₁V₁^⊤ + U₂Σ₂V₂^⊤,    (14)

and

X̂ = Û₁Σ̂₁V̂₁^⊤ + Û₂Σ̂₂V̂₂^⊤.    (15)

Let Θ(ran(U₁), ran(Û₁)) be the matrix of canonical angles between ran(U₁) and ran(Û₁), and let Θ(ran(V₁), ran(V̂₁)) be the matrix of canonical angles between ran(V₁) and ran(V̂₁) (here ran(·) denotes the linear space spanned by the columns). Define the residuals

R = XV̂₁ − Û₁Σ̂₁,    (16)
S = X^⊤Û₁ − V̂₁Σ̂₁.    (17)

Then suppose there is a number δ > 0 such that

min σ(Σ̂₁) ≥ δ  and  max σ(Σ₂) ≤ min σ(Σ̂₁) − δ.

Then

max{ ‖sin Θ(ran(U₁), ran(Û₁))‖, ‖sin Θ(ran(V₁), ran(V̂₁))‖ } ≤ max{ ‖R‖, ‖S‖ } / δ.

Further, suppose there are numbers α, δ > 0 such that

min σ(Σ̂₁) ≥ α + δ  and  max σ(Σ₂) ≤ α,

then for the Frobenius norm, or any unitarily invariant norm, we have

√( ‖sin Θ(ran(U₁), ran(Û₁))‖²_F + ‖sin Θ(ran(V₁), ran(V̂₁))‖²_F ) ≤ √( ‖R‖²_F + ‖S‖²_F ) / δ.

Let us use the above version of the perturbation bound to derive a lemma that is particularly useful in our case. Simple algebra tells us that

R = XV̂₁ − Û₁Σ̂₁    (18)
  = (X̂ − Z)V̂₁ − Û₁Σ̂₁    (19)
  = X̂V̂₁ − Û₁Σ̂₁ − ZV̂₁    (20)
  = Û₁Σ̂₁ − Û₁Σ̂₁ − ZV̂₁    (21)
  = −ZV̂₁.    (22)

Similarly, we have

S = X^⊤Û₁ − V̂₁Σ̂₁    (23)
  = (X̂ − Z)^⊤Û₁ − V̂₁Σ̂₁    (24)
  = −Z^⊤Û₁.    (25)

Thus, it holds that

‖R‖ = ‖ZV̂₁‖ ≤ ‖Z‖,

and similarly we have (since the operator norm of a whole matrix is larger than that of a submatrix)

‖S‖ = ‖Z^⊤Û₁‖ ≤ ‖Z‖.

Thus the following version of Wedin's theorem holds.

Lemma 7 (Davis–Kahan–Wedin Type Perturbation Bound).

It holds that

max{ ‖sin Θ(ran(U₁), ran(Û₁))‖, ‖sin Θ(ran(V₁), ran(V̂₁))‖ } ≤ ‖Z‖ / δ,

and also the following holds for the Frobenius norm (or any unitarily invariant norm):

√( ‖sin Θ(ran(U₁), ran(Û₁))‖²_F + ‖sin Θ(ran(V₁), ran(V̂₁))‖²_F ) ≤ √(2r) ‖Z‖ / δ,

where r is the number of columns of Û₁.
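The residual identities R = −ZV̂₁ and S = −Z^⊤Û₁ derived above are easy to verify numerically; a short sanity check (ours) for a rank-one signal:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 60, 80
X = np.outer(rng.standard_normal(m), rng.standard_normal(n))  # rank-1 signal
Z = 0.1 * rng.standard_normal((m, n))                          # noise
Xhat = X + Z

U, S, Vt = np.linalg.svd(Xhat, full_matrices=False)
U1, s1, V1 = U[:, :1], S[:1], Vt[:1, :].T                      # top singular pair

R = X @ V1 - U1 * s1                 # R = X V1_hat - U1_hat Sigma1_hat
assert np.allclose(R, -Z @ V1)       # matches the identity (22)
assert np.linalg.norm(R, 2) <= np.linalg.norm(Z, 2) + 1e-10    # ||R|| <= ||Z||
```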