# Near-Optimal Adaptive Compressed Sensing

This paper proposes a simple adaptive sensing and group testing algorithm for sparse signal recovery. The algorithm, termed Compressive Adaptive Sense and Search (CASS), is shown to be near-optimal in that it succeeds at the lowest possible signal-to-noise-ratio (SNR) levels, improving on previous work in adaptive compressed sensing. Like traditional compressed sensing based on random non-adaptive design matrices, the CASS algorithm requires only k log n measurements to recover a k-sparse signal of dimension n. However, CASS succeeds at SNR levels that are a factor log n less than required by standard compressed sensing. From the point of view of constructing and implementing the sensing operation as well as computing the reconstruction, the proposed algorithm is substantially less computationally intensive than standard compressed sensing. CASS is also demonstrated to perform considerably better in practice through simulation. To the best of our knowledge, this is the first demonstration of an adaptive compressed sensing algorithm with near-optimal theoretical guarantees and excellent practical performance. This paper also shows that methods like compressed sensing, group testing, and pooling have an advantage beyond simply reducing the number of measurements or tests -- adaptive versions of such methods can also improve detection and estimation performance when compared to non-adaptive direct (uncompressed) sensing.


## I Introduction

Compressed sensing (CS) has had a tremendous impact on signal processing, machine learning, and statistics, fundamentally changing the way we think about sensing and data acquisition. Beyond the standard compressed sensing methods that gave birth to the field, the compressive framework naturally suggests an ability to make measurements in an on-line and adaptive manner. Adaptive sensing uses previously collected measurements to guide the design and selection of the next measurement in order to optimize the gain of new information.

There is now a reasonably complete understanding of the potential advantages of adaptive sensing over non-adaptive sensing, the main one being that adaptive sensing can reliably recover sparse signals at lower SNRs than non-adaptive sensing (where SNR is proportional to the squared amplitude of the weakest non-zero element). Roughly speaking, to recover a k-sparse signal of length n, standard (non-adaptive) sensing requires the SNR to grow like log n. Adaptive sensing, on the other hand, succeeds as long as the SNR scales like log k. This is a significant improvement, especially in high-dimensional regimes. In terms of the number of measurements, both standard compressed sensing and adaptive compressed sensing require about k log(n/k) measurements.

This paper makes two main contributions in adaptive sensing. First, we propose a simple adaptive sensing algorithm, termed Compressive Adaptive Sense and Search (CASS), that is proved to be near-optimal in that it succeeds if the SNR scales like log k. From the point of view of constructing and implementing the sensing operation as well as computing the reconstruction, the CASS algorithm is comparatively less computationally intensive than standard compressed sensing. The algorithm could be easily realized in a number of existing compressive systems, including those based on digital micromirror devices [6, 7]. Second, CASS is demonstrated in simulation to perform considerably better than 1) compressed sensing based on random Gaussian sensing matrices and 2) non-adaptive direct sensing (which requires n measurements). To the best of our knowledge, this is the first demonstration of an adaptive sensing algorithm with near-optimal theoretical guarantees and excellent practical performance.

The results presented in this paper answer the following important question. Can sensing systems that make adaptive measurements significantly outperform non-adaptive systems in practice? Perhaps not surprisingly, the answer is yes, contrary to the theme of [8]. This is for two reasons: first, while the required SNR for success of adaptive procedures is at best a logarithmic factor in dimension smaller than for non-adaptive procedures, this factor is significant for problems of even modest size. On test signals (natural and synthetic images, see Figs. 5 and 6) the CASS procedure consistently outperforms standard compressed sensing by 2-8 dB. Second, in terms of computational complexity, adaptive sensing algorithms such as CASS require little or no computation after the measurement stage of the procedure is completed; in general, non-adaptive algorithms involve computation- and memory-intensive optimization routines after measurements are gathered.

This work also sheds light on another relevant question. Can compressive sensing systems with an SNR budget ever hope to outperform direct non-compressive systems which measure each element of an unknown vector directly? The answer to this question is somewhat surprising. For the task of support recovery, when the measurements are collected non-adaptively, compressive techniques are always less reliable than direct sensing with n measurements. However, adaptive compressed sensing with just k log(n/k) measurements can be more reliable than non-adaptive direct sensing, provided the signal is sufficiently sparse. This means that methods like compressed sensing, group testing, and pooling may have an advantage beyond simply reducing the number of measurements – adaptive versions of such methods can also improve detection and estimation performance.

Lastly, while adaptive sensing results in improved performance, we mention a few limitations. Making adaptive measurements can necessitate more flexible measurement hardware, incurring a cost in hardware design. In some systems, adaptive measurements could be easily realized (for example, existing compressive systems based on digital micromirror devices [6, 7]). In other systems, however, hardware constraints may make adaptive sensing entirely impractical. The results in this paper assume adaptive measurements can be made and the additive Gaussian noise model is applicable; both requirements are made precise in the following section.

### I-a Problem Setup and Background

Sparse support recovery in compressed sensing refers to the following problem. Let x ∈ ℝ^n be an unknown signal with k non-zero entries. The unknown signal is measured through linear projections of the form:

 y_i = ⟨a_i, x⟩ + z_i,  i = 1, …, m (1)

where the z_i are i.i.d. N(0, 1), and the a_i ∈ ℝ^n are sensing vectors with a total sensing energy constraint

 ∑_{i=1}^{m} ||a_i||₂² ≤ M. (2)

The goal of the support recovery problem is to identify the locations of the non-zero entries of x. Roughly speaking, the paramount result of compressed sensing states this is possible if 1) m is greater than a constant times k log(n/k) (which can be much less than n), and 2) the amplitude of the non-zero entries of x, and the total sensing energy M, aren't too small.
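As a concrete illustration of the measurement model (1)-(2), the following sketch draws a k-sparse signal and a random Gaussian sensing matrix rescaled to meet the energy budget. The sizes and amplitudes here are our own illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m, M = 1024, 8, 256, 1024.0           # illustrative sizes (assumptions)

x = np.zeros(n)                             # k-sparse signal
x[rng.choice(n, size=k, replace=False)] = 2.0

A = rng.standard_normal((m, n))             # random Gaussian sensing matrix ...
A *= np.sqrt(M) / np.linalg.norm(A, "fro")  # ... rescaled so sum_i ||a_i||_2^2 = M, as in (2)

y = A @ x + rng.standard_normal(m)          # noisy measurements y_i = <a_i, x> + z_i, as in (1)
```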

This paper is concerned with relaxing the second requirement through adaptivity. If the sensing vectors are fixed prior to making any measurements (the non-adaptive setting), then a necessary condition for exact support recovery is that the smallest non-zero entry of x be greater than a constant times √((n/M) log n) [9, 10]. The factor of M/n is best interpreted as the sensing energy per dimension, and the logarithmic factor is required to control the error rate across the n dimensions. Standard compressed sensing approaches based on random sensing matrices are known to achieve this lower bound while requiring on the order of k log(n/k) measurements [11].

In adaptive sensing, a_i can be a function of y_1, …, y_{i−1}. The sensing vectors are not fixed prior to making observations, but instead depend on previous measurements. In this adaptive scenario, a necessary condition for sparse recovery is that the smallest non-zero entry exceed a constant times √(n/M) [12]. To the best of our knowledge, no proposed algorithms achieve this lower bound while also using only order k log(n/k) measurements.

A handful of adaptive sensing procedures have been proposed, some coming close to meeting the lower bound while requiring only order k log(n/k) measurements. Most recently, in [5], the authors propose an algorithm that guarantees exact recovery provided the smallest non-zero entry exceeds an unspecified constant times √((n/M) log log log n), coming within a triply logarithmic factor of the lower bound. While this triply logarithmic factor is not of practical concern, removing the suboptimal dependence on dimension is of theoretical interest. Reducing the leading constant, on the other hand, is of great practical concern.

Footnotes to Table I: (a) Necessary to control the expected symmetric set difference over a slightly larger class of problems; see Sec. III-C and [12, Proposition 4.2]. (b) Sufficient for exact support recovery with Alg. 1 in the non-negative case, and sufficient to recover any fixed fraction of the support in the general case; see Theorems 1 and 2.

Table I summarizes the necessary and sufficient conditions for exact support recovery in the compressive, direct, non-adaptive and adaptive settings. Direct or un-compressed refers to the setting where individual measurements are made of each component – specifically, each sensing vector is supported on only one index. Here, support recovery is a traditional detection problem, and the non-adaptive SNR requirement becomes apparent by considering the following. If we measure each element of x once using sensing vectors that form an identity matrix when stacked together, then ||A||²_fro = n, and the requirement implies the smallest non-zero entry be greater than √((n/M)·2 log n). It is well known that the maximum of n i.i.d. Gaussians grows as √(2 log n); the smallest signal must exceed this value.
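The √(2 log n) growth of the maximum of n i.i.d. standard Gaussians is easy to check empirically. The quick sketch below is ours, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# The maximum of n i.i.d. N(0, 1) draws concentrates near sqrt(2 log n)
for n in (10**4, 10**6):
    sample_max = float(rng.standard_normal(n).max())
    print(n, round(sample_max, 2), round(float(np.sqrt(2 * np.log(n))), 2))
```

For moderate n the empirical maximum sits slightly below √(2 log n), reflecting the well-known lower-order correction terms.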

Adaptive direct sensing has recently received a fair amount of attention in the context of sparse recovery [14, 15, 16, 17, 18, 19, 20, 21, 22], and is closely related to traditional work in sequential statistical analysis. In this non-compressive setting, the number of measurements must be at least on the order of the dimension – i.e., m ≥ n, as testing procedures must at a minimum measure each index once. In this setting, a number of works have shown that scaling of the SNR as log k is necessary and sufficient for exact recovery [14, 18].

To summarize, Table I highlights the potential gains from compressive and adaptive measurements. Allowing for compressive measurements can reduce the total number of measurements from n to k log(n/k), but does not relax the SNR requirement. Allowing for adaptive measurements does not reduce the total number of measurements required; instead, it can reduce the required SNR scaling from log n to log k. Prior to this work, in the adaptive compressive case, it was unknown if SNR scaling as log k was sufficient for support recovery.

### I-B Main Results and Contributions

The main theoretical contribution of this work is to complete Table I by introducing a simple adaptive compressed sensing procedure that 1) requires only order k log(n/k) measurements, and 2) succeeds provided the minimum non-zero entry is greater than a constant times √((n/M) log k), showing this scaling is sufficient. To the best of our knowledge, this is the first demonstration of a procedure that is optimal in terms of dependence on SNR and dimension.

Specifically, we propose a procedure termed Compressive Adaptive Sense and Search (CASS). For recovery of non-negative signals, the procedure succeeds in exact support recovery with probability greater than 1 − δ provided the minimum non-zero entry of x is greater than

 √((20n/M)(log k + log(8/δ))),

requires exactly 2k log₂(n/k) measurements, and has total sensing energy ||A||²_fro ≤ M. This result implies that as n grows large, if the minimum non-zero entry exceeds a constant times √((n/M) log k), the procedure succeeds in exact support recovery.

When the signal contains both positive and negative entries, the CASS procedure guarantees the following. On average, the procedure succeeds in recovery of at least a 1 − ε fraction of the non-zero components provided their magnitude is at least

 √((20n/M)(log k + 2 log(8/ε))).

In this setting the procedure requires less than 8k/ε + 2k log₂(n/k) measurements and has total sensing energy ||A||²_fro ≤ M.

In addition to these theoretical developments, we present a series of numerical results that show CASS outperforms standard compressed sensing for problems of even modest size. The numerical experiments reinforce the theoretical results – as the dimension of the problem grows, performance of the CASS algorithm does not deteriorate, confirming the logarithmic dependence on dimension has been removed. We also show side-by-side comparisons between standard compressed sensing, direct sensing, and CASS on approximately sparse signals.

### I-C Prior Work

A number of adaptive compressed sensing procedures have been proposed (see [3, 23, 4, 24, 5, 25, 26] and references therein), some coming close to meeting the lower bound while requiring only order k log(n/k) measurements.

Most closely related to the CASS procedure proposed here, from an algorithmic perspective, are the procedures of Iwen [3], Iwen and Tewfik [4, 25], and the Compressive Binary Search of Davenport and Arias-Castro [24]. Much like CASS, these procedures rely on repeated bisection of the support of the signal. The first proposal of a bisecting search procedure applied to the adaptive compressed sensing problem, to the best of our knowledge, was [3]. The theoretical guarantees of this work, when translated to the setting studied here, ensure exact support recovery provided the non-zero entries of x are greater than a constant times √((n/M) log n) (see [25, Theorem 2] and, more succinctly, [4, Theorem 2]).

Most recently, in [5], the authors propose an algorithm termed Sequential Compressed Sensing. The procedure is based on random sensing vectors with sequentially masked support, allowing the procedure to focus sensing energy on the suspected non-zero components. The procedure guarantees exact recovery provided the smallest non-zero entry exceeds an unspecified constant times √((n/M) log log log n), coming within a triply logarithmic factor of the theoretical lower bound.

While the bisection approach used by CASS is suggested in a number of other works, the distinguishing feature of the procedure presented here is the allocation of the sensing energy. In the case of adaptive compressed sensing, roughly speaking, both procedures in [3], and later in [24], suggest an allocation of sensing energy across the bisecting steps of the procedure that ensures the probability a mistake is made at any step is the same. The CASS procedure instead uses less sensing energy for initial steps, and more sensing energy at later steps, reducing the probability the procedure makes an error from step to step. This allocation of the sensing energy removes any sub-optimal dependence on dimension from the SNR requirement; the CASS procedure succeeds provided the minimum non-zero entry exceeds a constant times √((n/M) log k).

Compressed sensing can be viewed as a variant of non-adaptive group testing (see [27, 28] for a detailed discussion). In traditional group testing [29], binary valued measurements indicate if a group of items contains one or more defectives, distinct from the real valued measurement model in (1). While the measurement models are different, the CASS procedure attempts to emulate adaptive group testing by forming partial sums of the sparse signal in an adaptive, sequential manner. In the traditional group testing literature, CASS is perhaps most similar to the generalized binary splitting procedure of [30]. Like generalized binary splitting, CASS first isolates defective items by measuring groups of size approximately n/k. The procedures deviate beyond this; the main distinguishing feature of CASS is the allocation of the sensing energy across passes required to cope with additive Gaussian noise. To the best of our knowledge, analogous allocations of measurements have not been proposed to cope with noise in adaptive group testing, as noisy group testing has remained largely unexplored [27]. This highlights another contribution of CASS – although we focus on the compressed sensing framework, the key idea (the allocation of the sensing energy) could be adapted to other measurement models, such as a binary symmetric noise model in group testing. In that case, CASS would suggest an allocation of measurements across stages to compensate for a fixed probability of error per measurement.

### I-D Notation

Notation adopted throughout, in general, follows convention. Bold face letters, e.g. x, denote vectors (both random and deterministic); matrices are denoted with capital letters, for example, A; while calligraphic font, such as S, denotes a set or an event. The indicator function 𝟙_E is equal to one if the event E occurs, and equals zero otherwise. The vector 𝟙_S, for any support set S, is a vector of ones over S, and zero elsewhere. Expectation is denoted E[·] and the probability of an event P(·).

## Ii Problem Details

Consider the canonical compressed sensing setup in which a sparse signal x ∈ ℝ^n supported on S ⊂ {1, …, n}, |S| = k, is measured through linear projections as in (1). In matrix-vector form,

 y = Ax + z

where z has i.i.d. N(0, 1) entries and A ∈ ℝ^{m×n} is the sensing matrix with total sensing energy constraint

 ||A||²_fro ≤ M. (3)

The total sensing energy constraint reflects a limit on the total amount of energy or time available to the sensing device, as in [5, 24]. Importantly, it results in a decoupling between the energy constraint and the number of measurements, allowing for fair comparison between procedures that take a different number of measurements. In adaptive sensing the rows of A, denoted a_i, are designed sequentially and adaptively based on prior observations y_1, …, y_{i−1}, as opposed to the non-adaptive setting in which a_1, …, a_m are fixed a-priori.

The error metrics considered are the probability of exact support recovery, or the family wise error rate,

 P(Ŝ ≠ S)

and the symmetric set difference

 d(Ŝ, S) = |{S ∖ Ŝ} ∪ {Ŝ ∖ S}|

where Ŝ is an estimate of S. Both depend on the procedure, n, M, and the amplitude of the non-zero entries of x. In particular, we quantify performance in terms of the minimum amplitude of the non-zero elements of x, given as

 x_min = min_{i ∈ S} |x_i|.

The SNR is defined as the amplitude of the smallest non-zero entry squared, scaled by the sensing energy per dimension:

 SNR = x_min² · M/n.
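In code, the two error metrics and the SNR above amount to the following small helpers (a sketch; the function names are ours):

```python
def symmetric_set_difference(S_hat, S):
    """d(S_hat, S): indices missed plus indices falsely included."""
    S_hat, S = set(S_hat), set(S)
    return len(S - S_hat) + len(S_hat - S)

def snr(x_min, M, n):
    """SNR = x_min^2 * (M / n): squared amplitude times sensing energy per dimension."""
    return x_min ** 2 * M / n

print(symmetric_set_difference({1, 2, 3}, {2, 3, 4}))  # → 2
print(snr(2.0, 100.0, 400))                            # → 1.0
```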

In general, no assumption is made on the support set S other than |S| = k. Theorem 2 requires the sparse support be chosen uniformly at random over all possible cardinality-k support sets, which can be avoided in practice by randomly permuting the coordinates of the signal before running the algorithm.

Lastly, throughout the paper we assume that both n and k are powers of two for simplicity of presentation. This can of course be relaxed, at the cost of increased constants.

## Iii Compressive Adaptive Sense and Search

Conceptually, the CASS procedure operates by initially dividing the signal into a number of partitions, and then using compressive measurements to test for the presence of one or more non-zero elements in each partition. The procedure continues its search by bisecting the most promising partitions, with the goal of returning the largest components of the vector. This compressive sensing and bisecting search approach inspired the name of the algorithm.

### Iii-a The CASS Procedure

The CASS procedure is detailed in Alg. 1. The procedure requires three inputs: 1) k, the number of non-zero coefficients to be returned by the procedure, 2) M, the total sensing energy budget, and 3) an initial scale parameter, which in most practical scenarios is set to 1.

Since the procedure is based on repeated bisections of the signal, it is helpful to define the dyadic subintervals of {1, …, n}. The dyadic subintervals, J_{j,ℓ}, are given by

 J_{j,ℓ} = {(ℓ−1)·n/2^j + 1, …, ℓ·n/2^j},  j = 0, 1, …, log₂ n,  ℓ = 1, …, 2^j

where j indicates the scale and ℓ the location of the dyadic partition.
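A small helper (ours, 1-indexed to match the definition above) makes the indexing explicit:

```python
def dyadic_interval(n, j, l):
    """Indices of J_{j,l} = {(l-1)*n/2**j + 1, ..., l*n/2**j}; n a power of two."""
    width = n // 2 ** j
    return list(range((l - 1) * width + 1, l * width + 1))

# Scale j = 1 bisects {1, ..., 8} into two halves
print(dyadic_interval(8, 1, 1))  # → [1, 2, 3, 4]
print(dyadic_interval(8, 1, 2))  # → [5, 6, 7, 8]
```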

The procedure consists of a series of steps. Conceptually, on the first step, s = 1, the procedure begins by dividing the signal into ℓ₀ partitions, where ℓ₀ is the smaller of n or the next power of two greater than or equal to 2k. These initial partitions are given by the dyadic sub-intervals of {1, …, n} at scale log₂(ℓ₀):

 J_{log₂(ℓ₀), ℓ},  ℓ = 1, …, ℓ₀.

The procedure measures the signal with ℓ₀ sensing vectors, each with support over a single dyadic sub-interval. The procedure then selects the k largest measurements in absolute value. These measurements are used to define the support of the sensing vectors used on step two of the procedure. More specifically, the supports of the sensing vectors corresponding to the k largest measurements are bisected, giving 2k support sets. These support sets define the support of the sensing vectors on step s = 2.

The procedure continues in this fashion, taking 2k measurements on each step (after the initial step, which requires ℓ₀ measurements) and bisecting the support of the k largest measurements to define the support of the sensing vectors on the next step. On the final step, s = s₀, where s₀ = log₂(n/ℓ₀) + 1, the support of each sensing vector consists of a single index; the procedure returns the support of the sensing vectors corresponding to the k largest measurements as the estimate of S and terminates. Additionally, estimates of the values of the non-zero coefficients are returned based on the measurements made on the last step.

One of the key aspects of the CASS algorithm is the allocation of the measurement budget across the steps of the procedure. On step s, the amplitude of the non-zero entries of the sensing vectors is given as

 √(Ms/(γn)) (4)

where γ is an internal parameter of the procedure used to ensure the total sensing energy constraint in (3) is satisfied with equality. Specifically,

 γ = 1 + (4k/ℓ₀) ∑_{s=2}^{s₀} s·2^{−s}.

This particular allocation of the sensing energy gives the CASS procedure its optimal theoretical guarantees. While the amplitude of each sensing vector grows in a polynomial manner with each step, the support of the sensing vectors decreases geometrically; this results in a decrease in the sensing energy across steps, but an increase in reliability. In essence, the allocation exploits the fact that an increase in reliability comes at a smaller cost for steps later in the procedure.
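As a consistency check (our bookkeeping, using (4) and the definition of γ above: ℓ₀ sensing vectors of support n/ℓ₀ on the first step, and 2k vectors of support n/(ℓ₀·2^(s−1)) on each step s ≥ 2), the allocation exhausts the budget (3) with equality:

```latex
\underbrace{\ell_0 \cdot \frac{n}{\ell_0}\cdot\frac{M}{\gamma n}}_{\text{step } 1}
\;+\;\sum_{s=2}^{s_0}\underbrace{2k \cdot \frac{n}{\ell_0\, 2^{s-1}}\cdot\frac{M s}{\gamma n}}_{\text{step } s}
\;=\;\frac{M}{\gamma}\Bigl(1+\frac{4k}{\ell_0}\sum_{s=2}^{s_0} s\, 2^{-s}\Bigr)
\;=\; M.
```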

Details of the CASS procedure are found in Alg. 1, and performance is quantified in the following section.
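A minimal sketch of the procedure just described may help make the bookkeeping concrete. This is our reconstruction, not the authors' reference implementation: the choice ℓ₀ = min(n, 2k) (scale parameter 1), the 0-based half-open interval representation, the `measure` callback, and the toy signal are assumptions; the amplitude schedule follows (4).

```python
import numpy as np

def cass(measure, n, k, M):
    """Sketch of CASS (Alg. 1) for non-negative signals; n, k powers of two, 2k <= n.

    measure(a) should return <a, x> + z with z ~ N(0, 1), as in (1).
    """
    l0 = min(n, 2 * k)                      # initial number of partitions (scale parameter 1)
    s0 = int(np.log2(n // l0)) + 1          # steps until partitions shrink to single indices
    gamma = 1 + (4 * k / l0) * sum(s * 2.0 ** -s for s in range(2, s0 + 1))

    intervals = [(i * (n // l0), (i + 1) * (n // l0)) for i in range(l0)]
    for s in range(1, s0 + 1):
        amp = np.sqrt(M * s / (gamma * n))  # amplitude from (4): grows with the step index
        ys = []
        for lo, hi in intervals:
            a = np.zeros(n)
            a[lo:hi] = amp                  # flat sensing vector supported on one partition
            ys.append(measure(a))
        keep = np.argsort(np.abs(ys))[-k:]  # k largest measurements in absolute value
        if s == s0:                         # partitions are singletons: report the support
            S_hat = sorted(intervals[i][0] for i in keep)
            x_hat = {intervals[i][0]: ys[i] / amp for i in keep}
            return S_hat, x_hat
        nxt = []
        for i in keep:                      # bisect the k most promising partitions
            lo, hi = intervals[i]
            nxt += [(lo, (lo + hi) // 2), ((lo + hi) // 2, hi)]
        intervals = nxt

# Toy run: amplitude 12 exceeds the Theorem 1 threshold (about 10.7 for delta = 0.1 here)
rng = np.random.default_rng(0)
n, k, M = 1024, 4, 1024.0
x = np.zeros(n)
x[[10, 300, 301, 900]] = 12.0
S_hat, x_hat = cass(lambda a: float(a @ x) + rng.standard_normal(), n, k, M)
print(S_hat)  # recovers [10, 300, 301, 900] with overwhelming probability at this SNR
```

The run above uses 2k·log₂(n/k) = 64 measurements and total sensing energy exactly M, matching the accounting in Theorem 1.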

### Iii-B Theoretical Guarantees

The theoretical guarantees of CASS are presented in Theorems 1 and 2. Theorem 1 presents theoretical guarantees for non-negative signals, while Theorem 2 quantifies performance when the signal is both positive and negative.

###### Theorem 1.

Assume x_i ≥ 0 for all i, and set the initial scale parameter to 1. For any δ > 0, Alg. 1 has P(Ŝ ≠ S) ≤ δ provided

 x_min ≥ √((20n/M)(log k + log(8/δ))),

has total sensing energy ||A||²_fro ≤ M and uses m = 2k log₂(n/k) measurements.

###### Proof.

See Appendix A. ∎

The theoretical guarantees of Theorem 1 do not apply to recovery of signals with positive and negative entries, as scenarios can arise where two non-zero components cancel when measured by the same sensing vector. In order to avoid this effect, the initial scale parameter can be set to a value less than one. Doing so increases the number of partitions that are created on the first step, but reduces the occurrence of positive and negative signal cancellation. Theorem 2 bounds the expected fraction of the components that are recovered when the signal contains positive and negative entries.

###### Theorem 2.

Assume S is chosen uniformly at random over all support sets of cardinality k. For any ε > 0, Alg. 1 with the initial scale parameter set to ε has E[d(Ŝ, S)] ≤ 2kε provided

 x_min ≥ √((20n/M)(log k + 2 log(8/ε))),

has total sensing energy ||A||²_fro ≤ M and takes

 m ≤ 8k/ε + 2k log₂(n/k)

measurements. Moreover, when the corresponding number of initial partitions is a power of two, this bound on the number of measurements holds with equality.

###### Proof.

See Appendix B. ∎

### Iii-C Discussion

Theorem 2 implies conditions under which a 1 − ε fraction of the support can be recovered in expectation. Provided ε is a fixed constant, the procedure requires order k log(n/k) measurements. In scenarios where one wishes to control P(Ŝ ≠ S), ε can be set to satisfy 2kε ≤ δ. In this case, while the SNR requirement remains essentially unchanged, the procedure would require on the order of k² measurements. In this sense, the compressive case with positive and negative non-zero values is more difficult, a phenomenon noted in other work [31]. Note that when k grows with n, the theoretical guarantee of Theorem 2 adapted to control the family wise error rate requires more than order k log(n/k) measurements.

The minimax lower bound of [12, Proposition 4.2] states the following (over a slightly larger class of k-sparse problems; see [12] for the precise class): if E[d(Ŝ, S)] ≤ 2kε, then necessarily

 x_min ≥ √((2n/M)(log k + log(1/(2ε)))).

Since the condition P(Ŝ ≠ S) ≤ δ in Theorem 1 implies E[d(Ŝ, S)] ≤ 2kδ (which is readily shown as the procedure always returns |Ŝ| = k, and hence has d(Ŝ, S) ≤ 2k when Ŝ ≠ S), the upper and lower bounds can be compared directly by setting δ = ε/(2k). In doing so, Theorem 1 implies that CASS succeeds with E[d(Ŝ, S)] ≤ 2kε provided

 x_min ≥ √((20n/M)(2 log k + log(16/ε))).

In the same manner, Theorem 2 can be directly compared. Doing so shows the theorems are tight to within constant factors.

The splitting or bisecting approach in CASS is a common technique for a number of search problems, including adaptive sensing. As discussed in Sec. I-C, to the best of our knowledge, the first work to propose a bisecting search for the compressed sensing problem was [3]. When translated to the framework studied here, the authors in essence allocate the sensing energy evenly across the bisecting steps, as opposed to increasing it with the index of the step as in (4). Likewise, the compressive binary search (CBS) algorithm [24] also splits the sensing energy evenly across steps. The CBS procedure finds a single non-zero entry with vanishing probability of error as n gets large provided the amplitude of the non-zero entry exceeds a constant times √((n/M) log log₂ n). The authors of [24] rightly question whether the doubly logarithmic term is needed. Thm. 1 answers this question, removing the doubly logarithmic term and coming within a constant factor of the lower bound in [24]. A more direct comparison to the work in [24] is found in [32].

Increasing the number of partitions to further isolate sparse entries is an idea suggested in [3, 25]. In particular, the authors suggest a procedure that creates a large number of random partitions based on primes, essentially decreasing the support of the sensing vectors such that each non-zero element is isolated with arbitrarily high probability.

## Iv Numerical Experiments

This section presents a series of numerical experiments aimed at highlighting the performance gains of adaptive sensing. In all experiments, the CASS procedure was implemented in the sparsifying basis according to Alg. 1 with inputs as specified. The signal to noise ratio is defined as

 SNR (dB) = 10 log₁₀( x₍k₎² · M/n )

where x₍k₎ is the amplitude of the k-th largest element of x. This definition reflects the SNR of measuring the k-th largest element using a fraction 1/n of the total sensing energy. In all experiments, Gaussian noise of unit variance is added to each measurement, as specified in (1).

Fig. 1 shows empirical performance of CASS for 1-sparse recovery, and, for comparison, 1) traditional compressed sensing with orthogonal matching pursuit (OMP) using a Gaussian ensemble with normalized columns and 2) direct sensing. As the dimension n of the problem is increased, notice that performance of CASS remains constant, while the error probability of both OMP and direct sensing increases. Note that for non-adaptive methods, in the one-sparse case OMP is equivalent to the maximum likelihood support estimator, and is thus optimal amongst non-adaptive recovery algorithms. The total sensing energy is the same in all cases, allowing for fair comparison. For sufficiently large n, CASS outperforms direct sensing.

Figs. 2 and 3 show performance of CASS for k-sparse signals with equal magnitude positive and negative non-zero entries. Performance is in terms of the empirical value of d(Ŝ, S) averaged over the trials. The support of the test signals was chosen uniformly at random from the set of k-sparse signals, and the non-zero entries were assigned a positive or negative sign with equal probability. The cancellation effect results in an error floor when the initial scale parameter is set to 1; this floor is greatly reduced when the scale parameter is decreased, clearly visible in both Fig. 2 and Fig. 3. The procedure is compared against traditional compressed sensing using a random Gaussian ensemble, recovered with LASSO and a regularizer tuned to return a k-sparse signal (using SpaRSA, [33]). Traditional compressed sensing is shown at two measurement budgets, each fairly compared against CASS run with the corresponding scale parameter.

In Fig. 3 notice that performance of CASS remains constant as n becomes large. For sufficiently large n, CASS outperforms direct sensing, highlighting a major advantage of adaptive sensing: for support recovery, while standard compressed sensing does not outperform direct sensing in terms of dependence on SNR, for sufficiently sparse problems, CASS does. As solving LASSO with a dense sensing matrix becomes computationally prohibitive for large problem sizes, its performance was only evaluated up to moderate dimensions. For all dimensions evaluated, compressed sensing using LASSO was inferior to CASS and direct sensing.

Fig. 4 shows results of running CASS on the 20 image classes in the Microsoft Research Object Class Recognition database [35]. Each class consists of approximately 30 images. Results were averaged over all images in a particular class. Direct sensing, standard compressed sensing using a Gaussian ensemble, and the group LASSO (G-LASSO) of [34, 36], which exploits signal structure, are included. G-LASSO is known to be one of the best non-adaptive methods for recovery of natural images, making it a natural benchmark for comparison (see [37, Fig. 5]). The total sensing energy was the same in all cases, and all methods used the same number of measurements (except direct sensing, which requires n measurements). Each algorithm was tuned to output a k-coefficient approximation. CASS universally outperforms both G-LASSO and standard CS on all image sets and SNRs evaluated.

Fig. 5 shows recovery of a test image from [38] for three SNRs. The image is approximately sparse in the Daubechies wavelet domain ('db4' in MATLAB). Alg. 1 was run with the sparsity level k as input and compared against LASSO with a random Gaussian ensemble, with the regularizer tuned to return k elements (again using SpaRSA, [33]). Using traditional compressed sensing, at the lowest SNR evaluated, the reconstructed image is essentially un-recognizable; at the same SNR, the image recovered with CASS is still very much recognizable. The peak-signal-to-noise ratio (PSNR) for images with pixel values in [0, 1] is defined as

 PSNR (dB) = −10 log₁₀( ||x − x̂||₂² / n )

where x̂ is an estimate of the image x.
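For reference, the PSNR computation above is a one-liner (a sketch assuming pixel values in [0, 1] and images flattened to length-n vectors):

```python
import numpy as np

def psnr_db(x, x_hat):
    """PSNR in dB for images with pixel values in [0, 1], flattened to length-n vectors."""
    return -10 * np.log10(np.sum((x - x_hat) ** 2) / x.size)

x = np.full(16, 0.5)
print(round(float(psnr_db(x, x + 0.1)), 6))  # → 20.0 (uniform per-pixel error of 0.1)
```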

The experiments of Fig. 5 were repeated 10 times, and the empirical results averaged over the 10 trials are shown in Table II. In terms of PSNR, CASS outperforms compressed sensing by 4-8 dB. The runtime of the simulation, which consists of the entire simulation time, including both measurement and recovery, is included. While the compressed sensing experiments were not optimized for speed, it seems unlikely any recovery method using dense Gaussian matrices would approach the runtime of CASS – standard compressed sensing runtime was on the order of a minute, while runtime of CASS was on the order of seconds.

Fig. 6 shows side-by-side performance of CASS, traditional compressed sensing (a dense Gaussian sensing matrix with the LASSO tuned to return $k$ non-zero components), and direct sensing on a natural image. In terms of PSNR, CASS outperforms compressed sensing by 2-4 dB. In terms of symmetric set difference, CASS substantially outperforms compressed sensing at all SNRs evaluated.

### IV-A Discussion

The leading constant in the upper bound, a factor of 20 required by the CASS procedure, is significant in practice. Roughly speaking, traditional direct sensing requires $x_{\min} \geq \sqrt{2(n/M)\log n}$, while CASS requires $x_{\min} \geq \sqrt{20(n/M)\log k}$ (ignoring constant confidence terms). One would then conclude that anytime the signal is sufficiently sparse, specifically if $10\log k \leq \log n$, the CASS procedure should outperform direct sensing. Notice that in simulation CASS outperforms direct sensing on signals far less sparse than this condition requires, implying the leading constant of 20 is loose.

## V Conclusion

This work presented an adaptive compressed sensing and group testing procedure for recovery of sparse signals in additive Gaussian noise, termed CASS. The procedure was shown to be near-optimal in that it comes within a constant factor of the best possible dependence on SNR, while requiring on the order of $k \log n$ measurements. Simulation showed that the theoretical developments hold in practice, including scenarios in which the signal is only approximately sparse.

While many of the questions regarding the gains of adaptivity for sparse support recovery have been settled, small questions remain. First, it would be of practical interest to further reduce the leading constant in the SNR requirement of CASS. The question of whether or not exact support recovery is possible in full generality (positive and negative non-zero entries, any level of sparsity), with SNR scaling as $\log k$ and the number of measurements scaling as $k \log n$, is still outstanding. Other questions, such as the applicability of CASS to signals which are sparse in overcomplete dictionaries, are also interesting future research directions.

## Appendix A

Proof of Theorem 1: The total number of measurements is given as

$$m = \ell_0 + 2k(s_0 - 1) = 2k\log_2\left(\frac{n}{k}\right)$$

since the first step requires $\ell_0$ measurements, and each of the remaining $s_0 - 1$ steps requires $2k$ measurements. The equality follows as $\ell_0 = 4k$ and $s_0 = \log_2(n/\ell_0) + 1$. The constraint in (3) is satisfied:

$$\begin{aligned} \|A\|_{\mathrm{fro}}^2 &= \sum_{\ell=1}^{\ell_0}\|a_{1,\ell}\|^2 + \sum_{s=2}^{s_0}\sum_{\ell=1}^{2k}\|a_{s,\ell}\|^2 \qquad (5)\\ &= \ell_0\left(\frac{n}{\ell_0}\right)\left(\frac{M}{\gamma n}\right) + 2k\sum_{s=2}^{s_0}\left(\frac{n}{\ell_0 2^{s-1}}\right)\left(\frac{Ms}{\gamma n}\right)\\ &= \frac{M}{\gamma}\left(1 + \frac{4k}{\ell_0}\sum_{s=2}^{s_0} s\,2^{-s}\right) = M \end{aligned}$$

where the second equality follows as each sensing vector on step $s$ has support of size $n/(\ell_0 2^{s-1})$ and squared amplitude $Ms/(\gamma n)$.
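The energy calculation can be verified numerically. The sketch below assumes the allocation described above (squared amplitude $M/(\gamma n)$ on the first step and $Ms/(\gamma n)$ on step $s \geq 2$, with support sizes $n/\ell_0$ and $n/(\ell_0 2^{s-1})$) together with $\ell_0 = 4k$:

```python
def sensing_energy(n, k, M, s0):
    """Total sensing energy under the assumed CASS allocation: l0 vectors of
    support size n/l0 and squared amplitude M/(gamma*n) on step 1, then 2k
    vectors of support size n/(l0*2^(s-1)) and squared amplitude M*s/(gamma*n)
    on each step s >= 2. Assumes l0 = 4k."""
    l0 = 4 * k
    gamma = 1 + (4 * k / l0) * sum(s * 2.0 ** (-s) for s in range(2, s0 + 1))
    energy = l0 * (n / l0) * (M / (gamma * n))  # step 1
    for s in range(2, s0 + 1):
        energy += 2 * k * (n / (l0 * 2 ** (s - 1))) * (M * s / (gamma * n))
    return energy

# With n = 4096 and k = 4: l0 = 16 and s0 = log2(n/l0) + 1 = 9.
M = 10.0
print(abs(sensing_energy(4096, 4, M, 9) - M) < 1e-9)  # True
```

The sum telescopes exactly as in (5): the normalization by $\gamma$ makes the total energy equal the budget $M$ regardless of $s_0$.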

A sufficient condition for exact recovery of the support set is that the procedure never incorrectly eliminates a measurement corresponding to a non-zero entry on any of the $s_0$ steps. Let $y_{s,1}, \dots, y_{s,t_s}$ be the measurements on step $s$ corresponding to sensing vectors with one or more non-zero entries in their support, and let the remaining measurements on step $s$ correspond to the zero entries. Consider a series of thresholds denoted $\tau_s$, $s = 1, \dots, s_0$. If $|y_{s,1}|, \dots, |y_{s,t_s}|$ are all greater than $\tau_s$, and the remaining measurements are all less than $\tau_s$ in absolute value, for all $s$, the sufficient condition is implied. Taking the complement of this event, and from a union bound,

$$P(\hat{S}\neq S) \leq P\left(\bigcup_{j=1}^{t_1}\{|y_{1,j}|\leq\tau_1\} \cup \bigcup_{j=t_1+1}^{\ell_0}\{|y_{1,j}|\geq\tau_1\}\right) + \sum_{s=2}^{s_0} P\left(\bigcup_{j=1}^{t_s}\{|y_{s,j}|\leq\tau_s\} \cup \bigcup_{j=t_s+1}^{2k}\{|y_{s,j}|\geq\tau_s\}\right)$$

for any series of thresholds $\tau_1, \dots, \tau_{s_0}$. As all non-zero elements are positive by assumption, and the signals combine constructively, it is straightforward to see that $y_{s,1}, \dots, y_{s,t_s}$ are normally distributed with mean greater than $x_{\min} a_s$ and unit variance. On the other hand, the remaining measurements are normally distributed with mean zero and unit variance. Setting the thresholds at $\tau_s = x_{\min} a_s / 2$, where $a_s = \sqrt{Ms/(\gamma n)}$ is the amplitude of the non-zero entries of the sensing vectors on step $s$, gives the series of equations in (6) - (8).

Here, (6) follows from the union bound. Equation (7) follows from a standard Gaussian tail bound: $1 - \Phi(x) \leq e^{-x^2/2}$ for $x \geq 0$, where $\Phi$ is the standard Gaussian cumulative distribution function. Lastly, (8) follows as $\gamma$ is never more than $5/2$:

$$\gamma = 1 + \frac{4k}{\ell_0}\sum_{s=2}^{s_0} s\,2^{-s} \leq \frac{5}{2} \qquad (9)$$

as the sum is arithmetico-geometric with $\sum_{s=2}^{\infty} s\,2^{-s} = 3/2$, and $\ell_0 = 4k$. Setting

$$x_{\min} \geq \sqrt{\frac{20n}{M}\left(\log k + \log\left(\frac{8}{\delta}\right)\right)}$$

in (8) gives $P(\hat{S} \neq S) \leq \delta$, completing the proof.
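The measurement accounting and the bound (9) on $\gamma$ can be checked numerically. The sketch below assumes the proof's parameter choices $\ell_0 = 4k$ and $s_0 = \log_2(n/\ell_0) + 1$, with $n$ and $k$ powers of two so the logarithms are integers:

```python
import math

def cass_params(n, k):
    """Return (m, gamma) under the proof's accounting, assuming l0 = 4k
    initial partitions and s0 = log2(n/l0) + 1 bisection steps."""
    l0 = 4 * k
    s0 = int(math.log2(n // l0)) + 1
    m = l0 + 2 * k * (s0 - 1)
    gamma = 1 + (4 * k / l0) * sum(s * 2.0 ** (-s) for s in range(2, s0 + 1))
    return m, gamma

for n, k in [(2 ** 10, 2 ** 2), (2 ** 16, 2 ** 5), (2 ** 20, 2 ** 4)]:
    m, gamma = cass_params(n, k)
    assert m == 2 * k * int(math.log2(n // k))  # m = 2k log2(n/k)
    assert gamma <= 2.5                         # eq. (9)
print("checks passed")
```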

## Appendix B

Proof of Theorem 2.

First, we confirm that the sensing budget is satisfied. In the same manner as before

$$\|A\|_{\mathrm{fro}}^2 = \sum_{\ell=1}^{\ell_0}\|a_{1,\ell}\|^2 + \sum_{s=2}^{s_0}\sum_{\ell=1}^{2k}\|a_{s,\ell}\|^2 = \frac{M}{\gamma}\left(1 + \frac{4k}{\ell_0}\sum_{s=2}^{s_0} s\,2^{-s}\right) = M.$$

The total number of measurements satisfies

$$m = \ell_0 + 2k(s_0 - 1) \leq \frac{8k}{\epsilon} + 2k\log_2\left(\frac{n}{k}\right)$$

where the inequality follows as $\ell_0 \leq 8k/\epsilon$ and $s_0 \leq \log_2(n/k) + 1$.

We proceed by bounding the expected number of non-zero components that are isolated when the signal is partitioned, and then show these indices are returned in $\hat{S}$ with high probability. We assume the support of the signal is chosen uniformly at random from all possible $k$-sparse support sets (note this is equivalent to randomly permuting the labels of all indices). Conceptually, the algorithm divides the signal support into $\ell_0$ disjoint support sets $J_{\log_2\ell_0,1}, \dots, J_{\log_2\ell_0,\ell_0}$, where $\ell_0 \geq 4k/\epsilon$. Let the set $I$ index the disjoint support sets that contain exactly one index corresponding to a non-zero entry:

$$I := \left\{\ell\in\{1,\dots,\ell_0\} : \left|J_{\log_2\ell_0,\,\ell}\cap S\right| = 1\right\}.$$

Then

$$E[|I|] = E\left[\sum_{\ell=1}^{\ell_0}\mathbb{I}\{\ell\in I\}\right] = \sum_{\ell=1}^{\ell_0}E\left[\mathbb{I}\{\ell\in I\}\right] = \ell_0\cdot\frac{k}{\ell_0}\left(\frac{\ell_0-1}{\ell_0}\right)^{k-1} = k\left(\frac{\ell_0-1}{\ell_0}\right)^{k-1} \geq k\left(\frac{\ell_0-k}{\ell_0}\right)$$

where $\mathbb{I}\{\cdot\}$ is the indicator function. Since $\ell_0 \geq 4k/\epsilon$, we have

$$E[|I|] \geq k(1-\epsilon/4). \qquad (10)$$
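The bound in (10) can be sanity-checked with a small Monte Carlo experiment. The partition into equal contiguous cells and the parameter values below are illustrative choices for this sketch, not from the paper:

```python
import random

def isolated_count(n, k, l0, rng):
    """Count partition cells holding exactly one support index when a k-sparse
    support is drawn uniformly at random and the n indices are split into l0
    equal contiguous cells (an illustrative partitioning choice)."""
    support = rng.sample(range(n), k)
    cell_size = n // l0
    counts = {}
    for idx in support:
        counts[idx // cell_size] = counts.get(idx // cell_size, 0) + 1
    return sum(1 for c in counts.values() if c == 1)

rng = random.Random(0)
n, k, eps = 4096, 8, 0.5
l0 = 64  # satisfies l0 >= 4k/eps = 64
avg = sum(isolated_count(n, k, l0, rng) for _ in range(2000)) / 2000
print(avg >= k * (1 - eps / 4))  # empirical E|I| meets the bound (10)
```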

Let $E$ be the event that the procedure fails to find at least as many true non-zero elements as the number that were isolated on the first partitioning, specifically,

$$E = \left\{|\hat{S}\cap S| < |I|\right\}.$$

Additionally, notice that since the procedure returns exactly $k$ indices,

$$E^c \;\Rightarrow\; d(\hat{S},S) \leq 2(k-|I|). \qquad (11)$$

We can bound the probability of the event $E$ in a fashion similar to the previous proof. Consider a threshold at step $s$ given as $\tau_s = x_{\min} a_s / 2$. Assume that $y_{s,1}, \dots, y_{s,t'_s}$ are measurements corresponding to isolated components, $y_{s,t'_s+1}, \dots, y_{s,t_s}$ correspond to measurements that contain more than one non-zero index, and the remaining measurements correspond to noise only. If all noise-only measurements are below the threshold in absolute value, and all isolated non-zero components are above the threshold, for all $s$, this implies $E$ does not occur, regardless of the outcomes of the measurements containing more than one non-zero index. As before, after applying a union bound,

$$\begin{aligned} P(E) &\leq \sum_{j=1}^{t'_1}P(|y_{1,j}|\leq\tau_1) + \sum_{j=t_1+1}^{\ell_0}P(|y_{1,j}|\geq\tau_1) + \sum_{s=2}^{s_0}\left(\sum_{j=1}^{t'_s}P(|y_{s,j}|\leq\tau_s) + \sum_{j=t_s+1}^{2k}P(|y_{s,j}|\geq\tau_s)\right)\\ &\leq \ell_0\exp\left(-\frac{x_{\min}^2 a_1^2}{8}\right) + \sum_{s=2}^{s_0}2k\exp\left(-\frac{x_{\min}^2 a_s^2}{8}\right)\\ &\leq \sum_{s=1}^{s_0}\exp\left(-\frac{x_{\min}^2 a_s^2}{8} + \log\left(\frac{8k}{\epsilon}\right)\right) \end{aligned}$$

where the second inequality follows from a standard Gaussian tail bound, and the last line follows as $\ell_0 \leq 8k/\epsilon$ and $2k \leq 8k/\epsilon$. Substituting $a_s^2 = Ms/(\gamma n)$ gives

$$P(E) \leq \sum_{s=1}^{s_0}\exp\left(-\frac{x_{\min}^2 M s}{8\gamma n} + \log\left(\frac{8k}{\epsilon}\right)\right) \leq \sum_{s=1}^{s_0}\exp\left(-\frac{x_{\min}^2 M s}{20 n} + \log\left(\frac{8k}{\epsilon}\right)\right)$$

since, from (9), $\gamma \leq 5/2$. Next, imposing the condition of the theorem,

$$x_{\min} \geq \sqrt{\frac{20n}{M}\left(\log k + 2\log\left(\frac{8}{\epsilon}\right)\right)} = \sqrt{\frac{20n}{M}\left(\log\left(\frac{8}{\epsilon}\right) + \log\left(\frac{8k}{\epsilon}\right)\right)},$$

then

$$P(E) \leq \sum_{s=1}^{s_0}\left(\frac{8}{\epsilon}\right)^{-s} \leq \epsilon/4. \qquad (12)$$

By combining (10), (11), and (12), since $d(\hat{S},S) \leq 2k$,

$$E[d(\hat{S},S)] = P(E)\,E[d(\hat{S},S)\mid E] + P(E^c)\,E[d(\hat{S},S)\mid E^c] \;\leq\; \frac{\epsilon}{4}(2k) + 2\big(k - k(1-\epsilon/4)\big) = k\epsilon,$$

completing the proof.
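As an aside, the geometric-series step used in (12) is easy to verify numerically:

```python
def p_e_bound(eps, s0=60):
    """sum_{s=1}^{s0} (8/eps)^{-s}: the middle expression in (12)."""
    r = eps / 8.0
    return sum(r ** s for s in range(1, s0 + 1))

# For eps <= 1 the ratio r = eps/8 gives r/(1 - r) <= eps/7 <= eps/4.
for eps in (0.01, 0.1, 0.5, 1.0):
    assert p_e_bound(eps) <= eps / 4
print(round(p_e_bound(1.0), 4))  # ~1/7, i.e. 0.1429
```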

## References

• [1] M. L. Malloy and R. D. Nowak, “Near-optimal adaptive compressed sensing,” in Signals, Systems and Computers (ASILOMAR), 2012 Conference Record of the Forty Sixth Asilomar Conference on.   IEEE, 2012, pp. 1124–1130.
• [2] J. Haupt, R. Nowak, and R. Castro, “Adaptive sensing for sparse signal recovery,” in Digital Signal Processing Workshop and 5th IEEE Signal Processing Education Workshop, 2009. DSP/SPE 2009. IEEE 13th.   IEEE, 2009, pp. 702–707.
• [3] M. Iwen, “Group testing strategies for recovery of sparse signals in noise,” in Signals, Systems and Computers, 2009 Conference Record of the Forty-Third Asilomar Conference on.   IEEE, 2009, pp. 1561–1565.
• [4] M. Iwen and A. Tewfik, “Adaptive compressed sensing for sparse signals in noise,” in Signals, Systems and Computers (ASILOMAR), 2011 Conference Record of the Forty Fifth Asilomar Conference on.   IEEE, 2011.
• [5] J. Haupt, R. Baraniuk, R. Castro, and R. Nowak, “Sequentially designed compressed sensing,” in Statistical Signal Processing Workshop (SSP), 2012 IEEE.   IEEE, 2012, pp. 401–404.
• [6] M. Duarte, M. Davenport, D. Takhar, J. Laska, T. Sun, K. Kelly, and R. Baraniuk, “Single-pixel imaging via compressive sampling,” Signal Processing Magazine, IEEE, vol. 25, no. 2, pp. 83–91, 2008.
• [7] V. Studer, J. Bobin, M. Chahid, H. Mousavi, E. Candes, and M. Dahan, “Compressive fluorescence microscopy for biological and hyperspectral imaging,” Proceedings of the National Academy of Sciences, vol. 109, no. 26, pp. E1679–E1687, 2012.
• [8] E. Arias-Castro, E. J. Candes, and M. Davenport, “On the Fundamental Limits of Adaptive Sensing,” ArXiv e-prints, Nov. 2011.
• [9] M. Wainwright, “Information-theoretic limits on sparsity recovery in the high-dimensional and noisy setting,” Information Theory, IEEE Transactions on, vol. 55, no. 12, pp. 5728 –5741, dec. 2009.
• [10] S. Aeron, V. Saligrama, and M. Zhao, “Information theoretic bounds for compressed sensing,” Information Theory, IEEE Transactions on, vol. 56, no. 10, pp. 5111 –5130, oct. 2010.
• [11] M. J. Wainwright, “Sharp thresholds for high-dimensional and noisy recovery of sparsity,” arXiv preprint math/0605740, 2006.
• [12] R. M. Castro, “Adaptive Sensing Performance Lower Bounds for Sparse Signal Estimation and Testing,” ArXiv e-prints, Jun. 2012.
• [13] M. L. Malloy and R. D. Nowak, “On the limits of sequential testing in high dimensions,” in Signals, Systems and Computers (ASILOMAR), 2011 Conference Record of the Forty Fifth Asilomar Conference on.   IEEE, 2011, pp. 1245–1249.
• [14] M. L. Malloy and R. Nowak, “Sequential testing for sparse recovery,” arXiv preprint arXiv:1212.1801, 2012.
• [15] D. Wei and A. O. Hero, “Multistage adaptive estimation of sparse signals,” in Statistical Signal Processing Workshop (SSP), 2012 IEEE.   IEEE, 2012, pp. 153–156.
• [16] G. Newstadt, E. Bashan, and A. Hero, “Adaptive search for sparse targets with informative priors,” in Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE International Conference on.   IEEE, 2010, pp. 3542–3545.
• [17] J. Haupt, R. M. Castro, and R. Nowak, “Distilled sensing: Adaptive sampling for sparse detection and estimation,” Information Theory, IEEE Transactions on, vol. 57, no. 9, pp. 6222–6235, 2011.
• [18] M. L. Malloy and R. D. Nowak, “Sequential analysis in high-dimensional multiple testing and sparse recovery,” in Information Theory Proceedings (ISIT), 2011 IEEE International Symposium on.   IEEE, 2011.
• [19] M. Malloy, G. Tang, and R. Nowak, “The sample complexity of search over multiple populations,” Information Theory, IEEE Transactions on, vol. 59, no. 8, pp. 5039–5050, Aug 2013.
• [20] E. Price and D. P. Woodruff, “Lower bounds for adaptive sparse recovery,” arXiv preprint arXiv:1205.3518, 2012.
• [21] K. Jamieson, M. Malloy, R. Nowak, and S. Bubeck, “On finding the largest mean among many,” in Signals, Systems and Computers (ASILOMAR), 2013 Conference Record of the Forty Seventh Asilomar Conference on.   IEEE, 2013.
• [22] ——, “lil’UCB: An optimal exploration algorithm for multi-armed bandits,” arXiv preprint arXiv:1312.7308, 2013.
• [23] S. Ji, Y. Xue, and L. Carin, “Bayesian compressive sensing,” Signal Processing, IEEE Transactions on, vol. 56, no. 6, pp. 2346–2356, 2008.
• [24] M. A. Davenport and E. Arias-Castro, “Compressive binary search,” ArXiv e-prints, Feb. 2012. [Online]. Available: http://arxiv.org/abs/1202.0937
• [25] M. Iwen and A. Tewfik, “Adaptive strategies for target detection and localization in noisy environments,” Signal Processing, IEEE Transactions on, vol. 60, no. 5, pp. 2344–2353, 2012.
• [26] J. Haupt, R. Baraniuk, R. Castro, and R. Nowak, “Compressive distilled sensing: Sparse recovery using adaptivity in compressive measurements,” in Signals, Systems and Computers, 2009 Conference Record of the Forty-Third Asilomar Conference on, nov. 2009, pp. 1551 –1555.
• [27] G. K. Atia and V. Saligrama, “Boolean compressed sensing and noisy group testing,” Information Theory, IEEE Transactions on, vol. 58, no. 3, pp. 1880–1901, 2012.
• [28] A. Gilbert, M. Iwen, and M. Strauss, “Group testing and sparse signal recovery,” in Signals, Systems and Computers, 2008 42nd Asilomar Conference on.   IEEE, 2008, pp. 1059–1063.
• [29] D. Du and F. Hwang, Combinatorial group testing and its applications.   World Scientific, 1993.
• [30] F. Hwang, “A method for detecting all defective members in a population by group testing,” Journal of the American Statistical Association, vol. 67, no. 339, pp. 605–608, 1972.
• [31] E. Arias-Castro, “Detecting a vector based on linear measurements,” Electronic Journal of Statistics, vol. 6, pp. 547–558, 2012.
• [32] M. L. Malloy and R. D. Nowak, “Near-Optimal Compressive Binary Search,” ArXiv e-prints, Mar. 2012. [Online]. Available: http://arxiv.org/abs/1203.1804
• [33] S. J. Wright, R. D. Nowak, and M. A. Figueiredo, “Sparse reconstruction by separable approximation,” Signal Processing, IEEE Transactions on, vol. 57, no. 7, pp. 2479–2493, 2009.
• [34] N. S. Rao, R. D. Nowak, S. J. Wright, and N. G. Kingsbury, “Convex approaches to model wavelet sparsity patterns,” in Image Processing (ICIP), 2011 18th IEEE International Conference on.   IEEE, 2011, pp. 1917–1920.
• [35] “Microsoft Research Cambridge Object Recognition Image Database,” http://research.microsoft.com/en-us/projects/objectclassrecognition/.
• [36] L. Jacob, G. Obozinski, and J.-P. Vert, “Group lasso with overlap and graph lasso,” in Proceedings of the 26th Annual International Conference on Machine Learning.   ACM, 2009, pp. 433–440.
• [37] N. S. Rao and R. D. Nowak, “Correlated Gaussian designs for compressive imaging,” in Image Processing (ICIP), 2012 19th IEEE International Conference on.   IEEE, 2012, pp. 921–924.
• [38] A. Frank and A. Asuncion, “UCI machine learning repository,” 2010. [Online]. Available: http://archive.ics.uci.edu/ml