A set of vectors has a common support whose cardinality is much smaller than the dimension of the vectors. It is easy to find this common support by simply inspecting a single vector, but doing so requires one measurement per coordinate. As is now well known, far fewer random linear measurements on a single vector suffice to recover the support. When multiple samples are available, one can estimate the support by considering each sample in isolation, but this multiplies the overall measurement count by the number of samples and ignores the fact that the samples share a common support. A natural question that arises, then, is whether we can still recover the unknown support with roughly the same overall number of measurements. We examine this question in a natural Bayesian setting and answer it in the negative: when the number of measurements per sample falls below the support size, a much larger number of overall measurements is necessary, in sharp contrast with the regime where each sample is measured sufficiently many times.
We start with the simpler Gaussian setting, and discuss the more general subgaussian setting in later sections. Specifically, consider independent samples, each a zero-mean Gaussian vector with a diagonal covariance matrix. We assume that each diagonal entry, which represents the variance along the corresponding coordinate, is either 0 or 1, whereby the common support of the vectors coincides with the locations of the 1s. The assumption that the variance vector is binary can be relaxed, as we discuss in Section V. We make linear measurements on the vectors using independent random Gaussian matrices with columns that have unit expected squared norms. The goal is to recover the common support using these measurements. We show that in the measurement-starved regime, where the number of measurements per sample is smaller than the support size, the minimum number of samples required to recover the support correctly with large probability is as characterized in Theorem 1 below.
The sample-optimal estimator we propose entails forming an estimate of the variance vector and then obtaining the support by selecting its largest entries. The estimate has a closed-form expression: it is simply an empirical average computed from the measurements and the columns of the measurement matrices. This is in contrast to the standard Sparse Bayesian Learning (SBL) approach, where the maximum likelihood (ML) estimate of the variance vector is used; that estimate can only be expressed as a nonconvex optimization problem and does not have a closed-form solution. Furthermore, the proposed estimator works in a much broader setting, one with a subgaussian prior on the data, subgaussian measurement matrices, and additive subgaussian noise. Note that under these minimal assumptions, the underlying statistical problem becomes nonparametric, and an ML estimate cannot even be expressed as a closed-form optimization problem. In Figure 1, we plot the probability of exact support recovery for our proposed closed-form estimator as a function of the normalized number of samples. These curves confirm our theoretical results, and a clear phase transition can be seen. We discuss these numerical evaluations further in Section III-C.
Our information-theoretic lower bound is obtained by applying Fano’s method to a difficult instance comprising supports that differ in exactly one entry. The main challenge in the proof is to characterize the reduction, due to linear measurements, in the distances between the distributions corresponding to different supports. We capture this by a quantity related to the spectrum of the Gram matrix of the random Gaussian measurement matrix.
Before describing the related literature, we make a distinction between support recovery and recovery of the data vectors. When the number of measurements per sample exceeds the support size, support recovery implies recovery of the data vectors as well. Indeed, given the support, one can estimate the data vectors by solving a least squares problem restricted to the support. In the measurement-starved regime, although support recovery is still possible, as we show in this work, data recovery is no longer possible, since there are infinitely many solutions even after restricting to the support. Recovery of the support, rather than the data vectors, is important in several practical applications, including spectrum sensing and group testing.
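The least-squares step mentioned above can be sketched in a few lines. The following is a minimal NumPy rendering (the function name and setup are ours, not from the paper): given the support, the data vector is estimated by solving least squares over the corresponding columns of the measurement matrix.

```python
import numpy as np

def recover_on_support(Phi, y, support):
    """Given the (known) support, estimate the data vector from its
    linear measurements y = Phi @ x by least squares restricted to the
    columns of Phi indexed by the support. A unique solution requires
    at least as many measurements as the support size."""
    support = np.asarray(support)
    x_hat = np.zeros(Phi.shape[1])
    coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    x_hat[support] = coeffs
    return x_hat
```

With noiseless measurements and at least as many measurements as the support size, a Gaussian measurement matrix yields exact recovery almost surely, since its restricted column submatrix has full column rank with probability one.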
Information-theoretically optimal support recovery in the single-sample setting is well understood. In particular, early work shows that, for a deterministic input vector, a certain number of measurements is necessary and sufficient to exactly recover the support using a Gaussian measurement matrix, establishing that support recovery from a single sample is impossible below this threshold. Following these works, several papers have extended the results to the multiple-sample setting. We would like to point out that the settings considered in these works vary in terms of whether the same measurement matrix is used for all samples or different measurement matrices are used across samples; whether the data vectors are deterministic or sampled from a generative model; and whether the measurements are noisy or noiseless. In our setting, as mentioned, the data vectors are generated from a certain generative model, the random measurement matrices are chosen independently across samples and independent of the data vectors, and the measurements are corrupted by subgaussian noise. We emphasize that both our upper and lower bounds are for random measurement matrix designs. There has also been work on approximate support recovery in the single-sample setting. In this work, however, our focus is on exact recovery.
A setup similar to ours was studied previously, but the known results are not tight in the measurement-starved regime; in particular, the best previously known lower bound on the sample complexity of support recovery is much weaker than ours. Another related line of work studies this problem with the same measurement matrix for all samples, under the assumption that the data vectors are deterministic. One approach connects the support recovery problem to communication over a single-input multiple-output multiple access channel (MAC); however, the performance guarantees are asymptotic in nature. Another shows that, using a Gaussian measurement matrix, the probability of error of the maximum likelihood decoder goes to zero under suitable conditions on the number of samples and measurements. Also, several algorithms from the single-sample setting have been generalized to work with multiple samples; these include convex programming methods, thresholding-based methods, Bayesian methods, and greedy methods. However, none of the above works addresses the tradeoff between the number of samples and the number of measurements per sample in the measurement-starved regime. Initial works considering this regime demonstrated empirically that, when multiple samples are available, it is possible to operate with fewer measurements per sample than the support size. However, the prior analysis is done under two fairly restrictive conditions. The first is an orthogonality assumption on the data vectors; in fact, as we show in our analysis, a much weaker assumption is sufficient for randomly generated data vectors with independent coordinates, in which case the condition holds in expectation. The second condition is much stronger than ours. (This condition, too, can be avoided if we replace some union bounds in our proof with appropriate bounds for heavy tails obtained using a tail-splitting technique; we omit this more technical approach to keep our proofs conceptually simple.)
In a related work, a LASSO-based approach is proposed to recover the common support using correlation among the data vectors. The authors empirically show that support recovery is possible using the same measurement matrix across samples (with a large number of samples) for support sizes exceeding the number of measurements, and conjecture that the support size can be even larger. In another recent work closely related to ours, the authors demonstrate the possibility of operating in the measurement-starved regime. Their results are for the same measurement matrix across samples and for data drawn from a certain prior. As in earlier work, the authors analyze the exhaustive-search ML decoder and show that its error probability decays exponentially with the number of samples. The error exponent, however, is expressed in terms of the eigenvalues of certain matrices, and its exact dependence on the problem parameters is not clear.
Our formulation of support recovery in a Bayesian setting naturally relates to work on covariance estimation. A recent work looks at the problem of estimating the covariance matrix from low-dimensional projections of the data, without structural assumptions on the covariance matrix. As we will see in the next section, the support recovery problem amounts to estimating a diagonal, sparse covariance matrix, and the general results from [3, Corollary 3] for this specific case are loose and do not give the correct scaling for the sample complexity. Two other works study covariance estimation from projected samples, focusing specifically on the case of one measurement per sample. In particular, [7, Theorem 4.1] assumes a low-rank-plus-identity structure (the spiked covariance model) on the data covariance matrix and shows that a number of samples depending on the rank of the true covariance matrix suffices for recovery via convex programming. Similar results have been obtained elsewhere, including for Toeplitz-structured covariance matrices. However, a direct application of these results to the diagonal sparse case does not give the correct scaling for the number of samples. Also, since the number of measurements per sample is set to one in these works, the tradeoff between the number of samples and the number of measurements is not clear.
Our setting is also related to the recently considered setting of inference under local information constraints. We impose information constraints on each sample by allowing only a limited number of linear measurements per sample. Roughly, our results say that when such local information constraints are placed, support recovery requires a much larger number of overall measurements.
Organization. In the next section, we formally state the problem and the assumptions we make in our generative model setting. We then state our main result, which characterizes the sample complexity of support recovery. In Section III, we propose and analyze a closed-form estimator and show it to be sample-optimal. We present simulation results, followed by the proof of the lower bound in Section IV. We conclude with a discussion of future work in Section VI. Some of the technical details have been relegated to the appendix. Appendices A and B contain the proof of the main technical result needed for analyzing our scheme. Appendix C contains a bound for the fourth moment of the minimum eigenvalue of a Wishart matrix, which is required for the lower bound and may be of independent interest. The background needed for our proofs is reviewed in Appendix D.
Notation and Preliminaries. For a matrix , denotes its th column, denotes its th entry and denotes the submatrix formed by columns indexed by . To denote the set of matrices (or vectors), we use the shorthand . Also, for a vector , denotes the th component of . For symmetric matrices and , the notation denotes that the matrix is positive semidefinite. For a vector , denotes a diagonal matrix with the entries of
on the diagonal. A random variable $X$ is subgaussian with variance parameter $\sigma^2$, denoted $X \sim \mathrm{subG}(\sigma^2)$, if
$$\mathbb{E}\big[e^{\theta (X - \mathbb{E}[X])}\big] \le e^{\theta^2 \sigma^2/2} \quad \text{for all } \theta \in \mathbb{R}. \qquad (1)$$
Similarly, a random variable $X$ is subexponential with parameters $\nu^2$ and $b$, denoted $X \sim \mathrm{subE}(\nu^2, b)$, if
$$\mathbb{E}\big[e^{\theta (X - \mathbb{E}[X])}\big] \le e^{\theta^2 \nu^2/2} \quad \text{for all } |\theta| < 1/b. \qquad (2)$$
When taking expectation of a function of several random variables , we use to denote that the expectation is with respect to the distribution of .
II. Problem formulation and main result
We start with the basic setting of Gaussian prior with noiseless measurements obtained using Gaussian sensing matrices. However, as we shall see later, our results generalize to much broader settings and extend to subgaussian priors on data and noisy subgaussian measurements.
In the basic setting, we consider a Bayesian formulation for support recovery where the input comprises independent samples, with each sample having a zero-mean Gaussian distribution. We denote the covariance of each sample by a diagonal matrix whose diagonal is a binary vector. That is, the (random) data vectors have a common support of fixed size, given by the locations of the 1s.
Each sample is passed through its own random measurement matrix with independent, zero-mean Gaussian entries, the variance chosen so that the columns have unit expected squared norm, and the observations are made available to a center. Using these measurements, the center seeks to determine the common support of the data vectors.
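As an illustrative sketch of this generative model (all parameter values and variable names below are our own choices; we take the entry variance to be $1/m$ so that each column has unit expected squared norm):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, m, n = 12, 3, 2, 4          # dimension, support size, measurements/sample, samples

support = rng.choice(d, size=k, replace=False)
lam = np.zeros(d)
lam[support] = 1.0                # binary variance vector: support = locations of 1s

# n independent data vectors sharing the support, and one fresh measurement
# matrix per sample; N(0, 1/m) entries give unit expected squared column norm.
Xs   = [np.sqrt(lam) * rng.standard_normal(d) for _ in range(n)]
Phis = [rng.standard_normal((m, d)) / np.sqrt(m) for _ in range(n)]
Ys   = [Phi @ x for Phi, x in zip(Phis, Xs)]
```

Here each observed measurement has only m entries, so in the measurement-starved regime (m smaller than k) an individual sample cannot reveal the support on its own.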
To that end, the center uses an estimate taking values in the set of all subsets of coordinates of the given cardinality. We seek estimators that can recover the support accurately with probability of error no more than a prescribed constant (this value is chosen for convenience and can be replaced with any acceptable level; however, our results may not be tight in their dependence on it), namely
In the compressed sensing literature, this is usually referred to as a non-uniform recovery guarantee.
We are interested in sample-efficient estimators. The next definition introduces the fundamental quantity of interest to us.
Definition 1 (Sample complexity of support recovery).
For , the sample complexity of support recovery is defined as the minimum number of samples for which we can find an estimator satisfying (3).
Our formulation assumes that the support size is known. That said, our proposed estimator extends easily to the setting where we only have an upper bound of on the support size, and we seek to output a set of indices containing the support.
Our main result is the following.
Theorem 1 (Characterization of sample complexity).
For and , the sample complexity of support recovery in the setting above is given by
Our proposed estimator and its analysis apply to a much broader setting involving subgaussian priors (see (1) for the definition). For the data, we can use any prior with independent, zero-mean, subgaussian entries. Our analysis goes through as long as the variances and the variance parameters differ only up to a constant factor.
Also, the measurement matrices can be chosen to have independent, zero-mean subgaussian distributed entries in place of Gaussian. However, as above, we assume that the variance and variance parameter of each entry are the same up to a multiplicative constant factor. Further, we assume that the fourth moment of the entries of is of the order of the square of the variance. Two important ensembles that satisfy these properties are the Gaussian ensemble and the Rademacher ensemble.
For clarity, we restate our assumptions below. These assumptions are required for the analysis of our estimator; the lower bound proof is done under the more restrictive setting of Gaussian measurement matrix ensemble (which implies a lower bound for the subgaussian ensemble also).
The entries of , , are independent and zero-mean with for and , where is an absolute constant;
The entries of , , are independent and zero-mean with , , and , where and are absolute constants.
Our results also extend to the case when the data vectors are not necessarily sparse in the standard basis for , i.e., the data vectors can be expressed as , , where is any orthonormal basis for and s have a common support of size . Under the same generative model as before, but for s this time, Theorem 1 continues to hold. This is because when is Gaussian, the effective measurement matrix also satisfies the properties we mentioned above, namely, it has independent mean zero Gaussian entries with variance and fourth moment .
We have restricted attention to binary variance vectors for ease of presentation. Later, in Section V, we will show that our sample complexity results extend almost verbatim to a more general setting in which the nonzero variances take values in a bounded range. The only change, in effect, is a corresponding factor blow-up in the sample complexity of support recovery.
Finally, we can allow noisy measurements where the noise has independent, zero-mean subgaussian entries independent of and , with variance parameter .
We present the upper bound for this more general setting, along with our proposed estimator, in the next section.
III. The estimator and its analysis
We will work with the more general setting described in the previous section, that is, with subgaussian random variables satisfying Assumptions 1 and 2. In fact, for simplicity, we assume that the data and the noise are subgaussian with variance parameter equal to their respective variances, a property known as strict subgaussianity. Also, for the measurement matrix, we work with the same parameters as those for the Gaussian ensemble, and assume that each entry is subgaussian with variance parameter equal to its variance. These assumptions of equality can be relaxed to equality up to multiplicative constants.
III-A. The estimator
We now present our closed-form estimator for the variance vector. To build intuition, consider the trivial case where we can directly access the samples. Then, a natural estimate for the variance vector is the sample variance. In our setting, however, we only have access to the linear measurements. We therefore apply the transpose of each measurement matrix to the corresponding measurement and treat the resulting vector as a “proxy” for the data vector; in the special case of noiseless measurements with an orthogonal measurement matrix, this proxy coincides with the data vector itself. We compute the sample variances using these proxy samples and use them to find an estimate for the support.
Formally, we consider the estimate for the covariance matrix of s given by
Note that is positive semidefinite. We form an intermediate estimate for the variance vector using the diagonal entries of as follows:
where denotes the th column of . Since we are only interested in estimating the support, we simply declare indices corresponding to the largest entries of as the support, namely, we sort to get and output
where denotes the index of the th largest entry in . This is similar in spirit to the Iterative Hard Thresholding (IHT) algorithm  from the compressed sensing literature, where a similar support estimation step followed by least squares is used to estimate the data vectors. The difference is that IHT is an iterative procedure and the least squares step requires . Also note that evaluating requires steps, whereby the overall computational complexity of (naively) evaluating our proposed estimator is .
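The estimator just described can be sketched in a few lines. The following is a hypothetical NumPy rendering (names are ours): form the proxy for each sample by applying the transpose of its measurement matrix, average the squared entries across samples, and keep the largest coordinates.

```python
import numpy as np

def support_estimate(Phis, Ys, k):
    """Closed-form support estimator: for each coordinate, average over
    samples the squared inner product between the measurement and the
    matching column of that sample's measurement matrix, then return
    the indices of the k largest averages."""
    d = Phis[0].shape[1]
    lam_hat = np.zeros(d)
    for Phi, y in zip(Phis, Ys):
        lam_hat += (Phi.T @ y) ** 2      # squared entries of the proxy sample
    lam_hat /= len(Phis)
    return set(np.argsort(lam_hat)[-k:].tolist()), lam_hat
```

Note that the per-sample cost is a single matrix-vector product, in line with the computational complexity noted above.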
Before we move to detailed analysis in the next section, we do a quick sanity test for our estimator and evaluate its “expected behavior”. An easy calculation shows that is an estimate of with a constant bias depending only on and . In particular, we have the following result.
Recall that for . Since , we can rewrite the estimator as
Taking expectation, we note that for ,
where the second step uses the fact that has zero mean entries. A similar calculation shows that for ,
Using our assumption that the columns of have independent mean-zero entries with variance and fourth moment , it follows from Lemma D.4 that for ,
and for ,
Combining the two results above, we get
which establishes the lemma. ∎
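For concreteness, in the Gaussian, noiseless specialization with $N(0,1/m)$ measurement entries and binary variances (the symbols $m$ for measurements per sample, $k$ for support size, $S$ for the support, and $\hat{\lambda}$ for the intermediate estimate are ours), a direct computation along the lines above yields
$$\mathbb{E}\big[\hat{\lambda}_i\big] = \begin{cases} 1 + \dfrac{k+1}{m}, & i \in S,\\[4pt] \dfrac{k}{m}, & i \notin S, \end{cases}$$
so the bias $k/m$ is common to all coordinates, while the on-support values exceed the off-support values by $1 + 1/m$.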
We work with this biased estimate and analyze its performance in the next section. Since the bias is the same across all coordinates, it does not affect sorting- or thresholding-based procedures. The key observation here is that the expected values of the coordinates of the estimate in the support exceed those outside the support, making it an appropriate statistic for support recovery.
III-B. Analysis of the estimator
A high level overview of our analysis is as follows. We first note that, conditioned on the measurement matrices, the entries of are sums of independent, subexponential random variables (defined in (2)). If we can ensure that there is sufficient separation between the typical values of in the and cases, then we can distinguish between the two cases. We show that such a separation holds with high probability for our subgaussian measurement matrix ensemble.
We now present the performance of our estimator.
We note that the result above applies for all and all , and not only to our regime of interest . When and , we obtain the upper bound claimed in Theorem 1.
While computationally tractable, analyzing our proposed estimator directly may not be easy. Instead, we analyze an alternative thresholding-based estimator given by
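A minimal sketch of this thresholding-based alternative (the function and threshold names are ours):

```python
import numpy as np

def threshold_support(lam_hat, tau):
    """Thresholding-based estimator: declare every coordinate whose
    statistic clears the threshold tau as part of the support."""
    return set(np.flatnonzero(lam_hat >= tau).tolist())
```

Whenever exactly the support-size many entries clear the threshold, this output coincides with that of the largest-entries estimator; this is the observation used next to reduce the analysis to (7).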
We note that if , the largest entries of must coincide with the support of . Therefore,
where is the support of . Using this observation, it suffices to analyze the estimator in (7), which will be our focus below.
The proof of Theorem 2 entails a careful analysis of tails of and uses standard subgaussian and subexponential concentration bounds. To bound the error term in (8), we rely on the measurement matrix ensemble satisfying a certain separation condition; we denote this event by and describe it in detail shortly. Denoting by the support of and by the support of , we note that can be bounded as
We show that the first two terms in the equation above, involving probabilities conditioned on the event , can be made small. Also, for the subgaussian measurement ensemble, occurs with large probability, which in turn implies that the overall error can be made small.
Our approach involves deriving tail bounds for the statistic conditioned on the measurement matrices, and then choosing a threshold to obtain the desired bound in (9); we derive lower tail bounds for coordinates in the support and upper tail bounds for coordinates outside it. The event mentioned above corresponds to the measurement ensemble being such that we can find a threshold that separates these bounds.
Specifically, note that
where we used . We proceed by observing that conditioned on , is a sum of independent subexponential random variables. In particular, using basic properties of subexponential random variables and the connection between subgaussian and subexponential random variables (described in Lemmas D.1 and D.2 given in the appendix), we get that conditioned on the measurement matrices , the random variable is
where and are absolute constants and
Using standard tail bounds for subexponential random variables given in Lemma D.1 and denoting , we have for ,
and for ,
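For reference, the standard Bernstein-type bound for a subexponential random variable $X \sim \mathrm{subE}(\nu^2, b)$ (the form typically stated in results like Lemma D.1; constants vary slightly across texts) reads
$$\Pr\big(|X - \mathbb{E}[X]| \ge t\big) \le 2\exp\left(-\frac{1}{2}\min\left\{\frac{t^2}{\nu^2}, \frac{t}{b}\right\}\right), \qquad t \ge 0,$$
with the subgaussian-like branch active for small deviations and the exponential branch for large ones.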
We can upper bound the sum of the first two terms in (9) by showing that, with large probability, the measurement matrices take values for which each term above is bounded by roughly the target error level. In particular, using a manipulation of the expression for the exponents, each of the conditional probabilities above will be sufficiently small if the measurement matrices satisfy the following condition for all relevant pairs of indices:
and a similar definition holds for . Thus, the sufficient condition in (10) can be rewritten as
Let us denote the event that condition (III-B) is satisfied by the measurement matrix ensemble for all pairs of indices. We will show that for measurement matrices drawn from the subgaussian ensemble satisfying Assumption 2, this event occurs with high probability. We establish this claim by showing that each term in (III-B) concentrates well around its expected value, which roughly suffices to guarantee that the separation required in (III-B) holds with large probability. The following result, which we prove in Appendix A, shows that (III-B) holds with large probability for all pairs of indices.
The separation condition (III-B) holds simultaneously for all pairs with probability at least if , where .
As long as the noise variance is sufficiently small, i.e., , our estimator is sample-optimal and achieves the same scaling as the lower bound that we prove in Section IV.
The separation condition (III-B) fails to hold for , regardless of which measurement ensemble is used. This is to be expected when , since from our lower bound for sample complexity, multiple samples are necessary in the regime.
III-C. Simulation results
In this section, we numerically evaluate the performance of the closed-form estimator in (5). Our focus is on exact support recovery, and we study the performance of our estimator over multiple trials. For our experiments, we use measurement matrices that are independent across samples and have i.i.d. entries. To generate measurements, we first pick a support uniformly at random from all possible supports of the given size. Next, the data vectors are generated according to one of two methods. In the first method, the nonzero entries of the data are i.i.d. Gaussian. In the second method, the nonzero entries are i.i.d. Rademacher (i.e., ±1-valued with equal probability). Both these distributions are subgaussian with variance parameters that are a constant multiple of the respective variances. We generate noiseless measurements according to the linear model described before. For a fixed set of parameters, we generate multiple instances of the problem and provide them as input to the estimator. For every instance, we declare success or failure depending on whether the support is exactly recovered, and the success rate is the fraction of instances on which recovery is successful. We performed multiple trials for every set of parameters. We can see from Figure 1 in Section I that the experimental results closely agree with our predictions. Also, the constant of proportionality is small.
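A hypothetical minimal reproduction of this experiment (Gaussian data, a fresh Gaussian measurement matrix per sample, exact recovery over repeated trials; all parameter choices and names are ours) might look as follows:

```python
import numpy as np

def success_rate(d, k, m, n, trials, rng):
    """Fraction of random instances in which the top-k estimator
    exactly recovers a uniformly random support (noiseless model)."""
    hits = 0
    for _ in range(trials):
        support = set(rng.choice(d, size=k, replace=False).tolist())
        lam_hat = np.zeros(d)
        for _ in range(n):
            x = np.zeros(d)
            x[list(support)] = rng.standard_normal(k)   # i.i.d. N(0,1) on the support
            Phi = rng.standard_normal((m, d)) / np.sqrt(m)
            lam_hat += (Phi.T @ (Phi @ x)) ** 2         # squared proxy entries
        est = set(np.argsort(lam_hat)[-k:].tolist())
        hits += (est == support)
    return hits / trials
```

Sweeping the number of samples for fixed dimension, support size, and measurements per sample traces out the phase transition seen in Figure 1.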
We also perform simulations for the case when the measurements are noisy. In particular, we consider Gaussian noise vectors distributed as described before. We plot the probability of exact support recovery against the normalized number of samples for four different values of the noise variance, while the other parameters are kept fixed. It can be seen from Figure 3 that the four curves overlap, indicating that the scaling of the sample complexity with respect to the noise variance is tight. Finally, Figure 3 shows the performance of MSBL, where we plot the probability of exact support recovery against the normalized number of samples (the normalization factor comes from the lower bound established in the next section). It can be seen that the curves do not overlap, indicating that MSBL exhibits a different scaling of the sample complexity with respect to the parameters than what is predicted by the lower bound.
IV. Lower bound
In this section, we prove the lower bound for sample complexity claimed in Theorem 1. We work with the Gaussian setting, where each has independent, zero-mean Gaussian entries with variance . Denote by the set and by , , the set obtained by replacing the element in with from . Let be distributed uniformly over the pairs . The unknown support is set to be ; the random variables and linear measurements are generated as before.
We consider the Bayesian hypothesis testing problem where we observe and seek to determine . Given any support estimator , we can use it to find an estimate for the support, which in turn will give an estimate for . Clearly, equals , which must be less than by our assumption. On the other hand, by Fano’s inequality, we get
where denotes the distribution of the measurements when the support of is (a proof for the second inequality can be found in [10, Theorem 21]). Note that with each having the same distribution which we denote by . Thus, .
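For reference, the form of Fano's inequality invoked here states that for a random variable $V$ uniform on a finite set $\mathcal{V}$ and any estimator $\hat{V}$ computed from observations $Y$,
$$\Pr\big(\hat{V} \ne V\big) \ge 1 - \frac{I(V; Y) + \log 2}{\log |\mathcal{V}|},$$
and the mutual information is in turn bounded via the average KL divergence between the observation distributions corresponding to different values of $V$.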
Next, we bound . Denote by the submatrix of obtained by restricting to the columns in and by the Gram matrix of . Further, let and be the respective eigenvalues of and . Note that and hold with probability since .
Denoting by the conditional distribution of the measurement when the measurement matrix is fixed to , we get
where the first inequality holds by Lemma D.5 and the second inequality holds since the eigenvalue bound above applies for all indices. Using convexity of the KL divergence, we get
Note that the expression on the right does not depend on our choice of support pair; we fix one. With a slight abuse of notation, we denote by the same symbols the columns of a random matrix with independent, identically distributed Gaussian entries. Using the Cauchy–Schwarz inequality twice, we get
where in the second inequality we also used the fact that the two sets of columns are identically distributed. The Hoffman–Wielandt inequality (for normal matrices $A$ and $B$ with spectra $\{\lambda_i\}$ and $\{\mu_i\}$, there exists a permutation $\pi$ of the indices such that $\sum_i |\lambda_i - \mu_{\pi(i)}|^2 \le \|A - B\|_F^2$; when $A$ and $B$ are positive semidefinite, the left-side is minimized when both sets of eigenvalues are arranged in increasing (or decreasing) order) can be used to handle the second term on the right-side. In particular, the resulting bound coincides with the Frobenius distance between the two Gram matrices. Using the triangle inequality for the Frobenius norm and noting that the Frobenius norm of the rank-one matrix formed from a vector equals its squared Euclidean norm, we get
Recall that and are independent distributed random vectors, and therefore is a chi-squared random variable with degrees of freedom. Using the expression for the fourth moment of a chi-squared random variable gives us
where is an absolute constant.
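For reference, the moments of a chi-squared random variable $Z \sim \chi^2_m$ used in this step are
$$\mathbb{E}[Z] = m, \qquad \mathbb{E}\big[(Z - m)^2\big] = 2m, \qquad \mathbb{E}\big[(Z - m)^4\big] = 12m^2 + 48m,$$
so that the fourth central moment is of order $m^2$.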