Average performance of Orthogonal Matching Pursuit (OMP) for sparse approximation

09/18/2018 ∙ by Karin Schnass, et al. ∙ Leopold Franzens Universität Innsbruck

We present a theoretical analysis of the average performance of OMP for sparse approximation. For signals that are generated from a dictionary with K atoms and coherence μ, with coefficients corresponding to a geometric sequence with parameter α, we show that OMP is successful with high probability as long as the sparsity level S satisfies Sμ^2 K ≲ 1-α. This improves by an order of magnitude over worst-case results and shows that OMP and its famous competitor Basis Pursuit can each outperform the other depending on the setting.


1 Introduction

In sparse approximation the goal is to approximate a given signal y ∈ R^d by a linear combination of a small number of elements, called atoms, out of a given larger set, such as a basis or a frame, called the dictionary. Storing the K normalised atoms φ_k as columns in the dictionary matrix Φ and denoting the restriction of Φ to the columns indexed by a set I by Φ_I, we can write informally

y ≈ Φ_I x_I.     (1)

Finding the smallest approximation error for a given sparsity level S and the corresponding support set I, which determines x_I via x_I = Φ_I^† y, where Φ_I^† is the Moore-Penrose pseudo-inverse, is in general an NP-hard problem unless the dictionary is an orthonormal system. In this case thresholding, meaning choosing I as the indices of the atoms having the S largest inner products with the signal in magnitude, will succeed. In all other cases one has to find algorithms which are more efficient, if less optimal, than an exhaustive search through all possible support sets with subsequent projection. The two most investigated directions are greedy methods and convex relaxation techniques - the two golden classics being Orthogonal Matching Pursuit (OMP), [13], and Basis Pursuit (BP), [3], respectively.
OMP finds the support iteratively, in each step adding the index of the atom which has the largest absolute inner product with the residual and then updating the residual. So, initialising I_0 = ∅ and r_0 = y, it repeats

I_{k+1} = I_k ∪ {argmax_j |⟨φ_j, r_k⟩|}   and   r_{k+1} = y - Φ_{I_{k+1}} Φ_{I_{k+1}}^† y

until a stopping criterion is met, such as reaching the desired number of iterations or the size of the residual/inner product being sufficiently small.
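For concreteness, here is a minimal NumPy sketch of this iteration; it is only an illustration of the scheme described above, and the function name, tolerance and stopping rule are our own choices, not the authors'.

```python
import numpy as np

def omp(Phi, y, n_iter, tol=1e-6):
    """Orthogonal Matching Pursuit on a dictionary Phi with unit-norm columns."""
    support, residual = [], y.copy()
    for _ in range(n_iter):
        # pick the atom with the largest absolute inner product with the residual
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        # recompute the coefficients on the enlarged support by least squares
        x_I, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ x_I
        if np.linalg.norm(residual) < tol:  # alternative stopping criterion
            break
    return support, x_I, residual
```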
The Basis Pursuit principle, on the other hand, prescribes finding the minimiser x̂ of the convex programme

min_x ‖x‖_1   subject to   Φx = y,     (2)

and choosing I as the index set of the S largest entries of x̂ in magnitude. The interesting question concerning both schemes is when they are successful, meaning: assuming that the signal is known to be S-sparse, that is y = Φ_I x_I with |I| = S, when can they recover the support I? This was first studied in [15, 7], and for dictionaries with coherence μ = max_{j≠k} |⟨φ_j, φ_k⟩| a sufficient condition for both schemes to succeed is that μ(2S-1) < 1, which is relaxed in comparison to the sufficient condition for thresholding but still quite restrictive, especially considering the much better performance in practice. This led to the investigation of the average performance when modelling the signals as

y = Φ_I x_I = ∑_{i ≤ S} σ_i c_i φ_{p(i)},     (3)

where σ is a Rademacher sequence, the coefficient sequence c is non-increasing, c_i ≥ c_{i+1} ≥ 0 with c_i = 0 for i > S, and p is some permutation such that the support I = p({1, …, S}) is well conditioned, meaning that ‖Φ_I^⋆Φ_I - Id‖ is small, where for a matrix A the transpose is denoted by A^⋆.
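As an illustration, signals of this type can be generated as in the following sketch; the exact parametrisation of the geometric coefficient sequence (c_i proportional to α^i, normalised) and the random choice of support are our assumptions for the example.

```python
import numpy as np

def sparse_signal(Phi, S, alpha, rng):
    """Draw y = Phi_I x_I with random support, Rademacher signs and
    normalised, geometrically decaying coefficients c_i ~ alpha**i."""
    d, K = Phi.shape
    I = rng.choice(K, size=S, replace=False)        # random support (permutation p)
    c = alpha ** np.arange(S)                       # non-increasing coefficients
    c /= np.linalg.norm(c)                          # normalise the sequence
    sigma = rng.choice([-1.0, 1.0], size=S)         # Rademacher signs
    return Phi[:, I] @ (sigma * c), I, sigma * c

# example usage with a random unit-norm dictionary (illustrative only)
rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 128))
Phi /= np.linalg.norm(Phi, axis=0)
y, I, x_I = sparse_signal(Phi, S=8, alpha=0.9, rng=rng)
```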
It was shown that BP recovers the true support with high probability as long as the sparsity level scales as in the condition of [16] (the theorem there actually considers Steinhaus instead of Rademacher sequences; however, the proof for Rademacher sequences is exactly the same - simply use Hoeffding's inequality instead of the complex Bernstein inequality - and one should also beware the buggy support condition in model M1), and that thresholding succeeds with high probability under the condition of [14] (the improved constant presented here is due to the fact that the corresponding bound also holds for Rademacher sequences). The fact that for OMP a similar result could only be found in a multi-signal scenario, [9], started to give OMP the reputation of being weaker than BP.
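For reference, the Basis Pursuit step (2) can be phrased as a linear programme; the sketch below uses scipy and the standard positive/negative-part splitting, and is merely illustrative - it is not the implementation behind any of the cited results or the experiments in Section 4.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y, S):
    """Solve min ||x||_1 s.t. Phi x = y via the LP reformulation x = u - v,
    u, v >= 0, and return the indices of the S largest entries in magnitude."""
    d, K = Phi.shape
    c = np.ones(2 * K)                    # objective sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([Phi, -Phi])         # equality constraint Phi (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    x = res.x[:K] - res.x[K:]
    return np.argsort(np.abs(x))[-S:], x
```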
This reputation was further cemented by the advent of Compressed Sensing (CS), [4], which can be seen as sparse approximation with design freedom for the dictionary. While for BP-type schemes in combination with randomly chosen dictionaries strong results appeared very early, [2, 1], comparable results for OMP and its variants took longer to develop and are weaker in general, [8, 12]. Still, thanks to its computational advantages and flexibility, e.g. concerning the stopping criteria, OMP remained popular in signal processing - the only difference being that users had a defensive statement à la 'of course BP will perform even better' ready at all times.
Contribution: Here we provide the long missing analysis of the average performance of OMP and show that on average neither BP nor OMP is stronger, but confirm the folklore wisdom that OMP works better for signals with decaying coefficients, while BP is better for equally sized coefficients. The idea that the performance of OMP improves for decaying coefficients has already been used in [10], and the simplified result there states that if the sorted absolute coefficients form a geometric sequence with decay factor α, then OMP is guaranteed to succeed for all sparsity levels satisfying the corresponding worst-case bound. Replacing certainty with high probability, we will relax this bound by an order of magnitude.
Organisation: We will first address full support recovery in the noiseless case and then extend this result to partial support recovery in the noisy case. In Section 4 we will conduct two experiments showing that our theoretical results accurately predict the average performance, before finally discussing our results and future work.

2 Noiseless Case

We start with the simple case of signals following the model in (3). Note that from [16] we know that for a randomly chosen subset (permutation p) the well-conditioning assumption is satisfied with high probability as long as the sparsity level is not too large.

Assume that the signals follow the model in (3) and that for the coefficients satisfy for . Then, except with probability , OMP will recover the full support as long as

(4)

Before presenting the proof, we want to provide some background information on the ideas used for proving the success of OMP and the difficulties associated with an average-case analysis. A necessary and sufficient condition for a step of OMP to succeed is that for the current (correct) sub-support J ⊂ I and the associated residual r_J we have

max_{i ∈ I∖J} |⟨φ_i, r_J⟩| > max_{k ∉ I} |⟨φ_k, r_J⟩|.     (5)

Thus a sufficient condition for OMP to fully recover the support is that for all possible sub-supports J ⊂ I the missing atom which has the largest coefficient satisfies

(6)

If the coefficients have random signs then for all the inner products should concentrate around their expectation,

(7)

so a suitable condition on the sparsity level should ensure success with high probability. The problem is that there are exponentially many sub-supports for which we need to have this concentration. So, taking a union bound for the probability of insufficient concentration over all sub-supports, we get back the worst-case condition, only now with a non-zero failure probability. The immediate conclusion is that in order to get a useful average-case result, we have to reduce the number of intermediate supports that we need to control. For equally sized coefficients this is impossible, since the random signs determine the order of the absolute inner products. However, if the coefficients exhibit some decay, there is a natural order and it is more likely that atoms with large coefficients are picked first. For instance, with sufficient decay it might happen that the atom with the second largest coefficient is picked before the one with the largest, but it is very unlikely that the atom with the smallest coefficient is picked first. The idea of the proof is that OMP will only pick 'sensible' sub-supports, so we only need to ensure concentration for a much smaller number of them. The amount of concentration needed can then be further reduced by pooling 'sensible' supports of the same type and combining probabilistic and deterministic bounds.
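The following small simulation illustrates this intuition under the coefficient model sketched earlier (dictionary, dimensions and the notion of 'smallest coefficients' are our own illustrative choices): it estimates how often the very first OMP selection already falls on one of the two atoms with the smallest coefficients.

```python
import numpy as np

rng = np.random.default_rng(1)
d, K, S, trials = 64, 128, 8, 2000
Phi = rng.standard_normal((d, K))
Phi /= np.linalg.norm(Phi, axis=0)                 # unit-norm random dictionary

for alpha in (1.0, 0.9, 0.7):
    c = alpha ** np.arange(S)                      # geometric coefficient decay
    small_first = 0
    for _ in range(trials):
        I = rng.choice(K, size=S, replace=False)
        sigma = rng.choice([-1.0, 1.0], size=S)
        y = Phi[:, I] @ (sigma * c)
        first_pick = np.argmax(np.abs(Phi.T @ y))  # first atom chosen by OMP
        small_first += first_pick in I[-2:]        # one of the two smallest coefficients?
    print(f"alpha={alpha}: first pick among the two smallest coefficients "
          f"in {small_first / trials:.1%} of runs")
```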
We use shorthands for the residual based on an index set J, denoted r_J, and for the signed coefficients σ_i c_i. In order to better understand the various bounds on terms involving these quantities, we recommend a quick familiarisation with Lemma 6.2 in [9]. Further, we will assume w.l.o.g. - that is, by reordering the dictionary matrix - that the support is I = {1, …, S} with non-increasing coefficients. We now define the following sets, for parameters to be specified later,

(8)
(9)
(10)

We call a sub-support admissible if there exist and such that . We write .
A sufficient condition for OMP to succeed is that it only picks admissible sub-supports. Assuming is admissible, OMP picks another admissible support if (suff. cond.)

(11)

Since is admissible the residual has the form

(12)

and we have for , the index of the largest missing coefficient,

(13)

Using the identities and , to further split the last term above into,

leads to

(14)

Similarly we have for all

where we have used the shorthand .
We first bound the terms involving . So for all and , respectively, we have

(15)
(16)

Next we bound the terms involving . For we have

(17)

as well as

(18)

We bound the remaining terms with high probability using Hoeffding’s inequality. For we have

To bound we use that

(19)

Setting , we get

(20)

Combining all our estimates we get that except with probability

we have for all

We now determine the sets and by choosing to get the following bounds. For all we have

(21)

as well as and

Using and setting , for all , we get except with probability for all ,

(22)

In case the deterministic analysis holds, the theorem is trivially true. If conversely it does not, then (4) implies that the expression above is larger than zero, which further implies that OMP will pick another admissible sub-support. Taking a union bound over all possible sets, we get that OMP will succeed with high probability as long as (4) holds.
First note that the theorem only improves over the worst-case analysis when the coefficient decay is moderate. However, if conversely the decay is very strong, then the ratio between the largest and smallest coefficient is large, so thresholding should still have a good success probability.
To get a better feeling for the quality of the theorem we next specialise it to the case , where the coefficients form a sub-geometric sequence with parameter , meaning . In this case the theorem essentially says that OMP will recover the support except with probability as long as and . Comparing this to the condition for BP, , for failure probability , we see that OMP has the advantage that the admissible sparsity level has a milder dependence on the dictionary size and success probability while BP has the advantage of being independent of the coefficient decay. This means that each algorithm can outperform the other depending on the setting. Before confirming this in the numerical simulation in Section 4 we first have a look at the performance of OMP in a noisy setting.

3 Noisy Case

We next study partial support recovery, when the sparse signals are contaminated with noise and are modelled as

y = Φ_I x_I + η,     (23)

with the sparse part Φ_I x_I as in the previous section and η a sub-Gaussian noise vector with parameter ρ. This means that E(η) = 0 and that for all unit vectors v and all t the marginals satisfy E(exp(t⟨v, η⟩)) ≤ exp(t²ρ²/2). For Gaussian noise the parameter ρ corresponds to the standard deviation, and so for normalised coefficient sequences the signal-to-noise ratio is of the order 1/(dρ²). Similar bounds also hold in the general case, [11]. In the noisy setting we clearly cannot recover coefficients below the noise level, so with more decay there will be a trade-off between being able to recover more atoms and decreasing the coefficients faster.

Assume that the signals follow the model in (23) and that the coefficients satisfy a suitable decay condition. Then, in each of the first steps, OMP will recover an atom from the support, except with small probability, as long as

(24)
(25)

We use the same approach as before, assuming w.l.o.g. , but take into account the new expression for the residuals

(26)

and inner products . If is an admissible sub-support, for , then a sufficient condition for OMP to pick another admissible support in the noisy case is that for all we have

This means that we need to bound . Using the decomposition we can rewrite

(27)

To bound with high probability we use the sub-Gaussian property of for the marginals , with . For this leads to

where we have used that . We substitute this bound into (27) and use the Cauchy-Schwarz inequality as well as to get, except with probability ,

(28)

Using the intermediate expression in (22) as well as (28) with and setting we get, except with probability , for all with and all that

(29)

To get to the final statement observe that the conditions in (24)/(25) imply that (29) is larger than zero, which in turn implies recovery of a correct atom in the first steps.

4 Numerical Simulations

Figure 1: Percentage of correctly recovered supports for noiseless signals with various sparsity and coefficient decay parameters via BP (a,d), OMP (b,e) and thresholding (c,f) in the Dirac-DCT dictionary (a,b,c) and the Dirac-DCT-random dictionary (d,e,f).
Figure 2: Percentage of correctly recovered atoms before recovery of the first wrong atom via OMP, for signals with various sparsity levels and coefficient decay parameters contaminated with no noise (a,d) or Gaussian noise at two levels (b,e) and (c,f), in the Dirac-DCT dictionary (a,b,c) and the Dirac-DCT-random dictionary (d,e,f), as well as the percentage of correctly recoverable atoms for the two noise levels (g,h).

In the first experiment we draw permutations and sign sequences and for each pair count how often BP, OMP and thresholding can recover the full support from the corresponding signals. From the results in Figure 1 we can see that the success region of OMP is indeed a union of two areas, one derived from the worst-case analysis and one from the average-case analysis. In particular, we can see the linear dependence of the breakdown sparsity level on the decay parameter. We can also see that the success of BP is not influenced by the coefficient decay and that, as indicated by theory, neither BP nor OMP is better in general, but each of them is better in a certain region. Finally, observe that the price one has to pay for the computational lightness of thresholding is the very limited range of parameters where it performs well.
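A condensed sketch of such a recovery experiment for OMP is given below; the Dirac-DCT dictionary is built as the concatenation of the identity and an orthonormal DCT basis, while the dimensions, sparsity level, decay parameter and number of trials are illustrative choices rather than the settings used for Figure 1.

```python
import numpy as np
from scipy.fft import dct

def dirac_dct(d):
    """Dirac-DCT dictionary: identity (Dirac) basis next to an orthonormal
    DCT basis, giving K = 2d unit-norm atoms."""
    return np.hstack([np.eye(d), dct(np.eye(d), axis=0, norm="ortho")])

def omp_support(Phi, y, S):
    support, residual = [], y.copy()
    for _ in range(S):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        x, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ x
    return set(support)

rng = np.random.default_rng(0)
d, S, alpha, trials = 128, 12, 0.9, 200
Phi = dirac_dct(d)
c = alpha ** np.arange(S)
hits = 0
for _ in range(trials):
    I = rng.choice(Phi.shape[1], size=S, replace=False)    # random support
    y = Phi[:, I] @ (rng.choice([-1.0, 1.0], size=S) * c)  # Rademacher signs
    hits += omp_support(Phi, y, S) == set(int(i) for i in I)
print(f"full support recovered in {hits / trials:.1%} of trials")
```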
In the second experiment we additionally draw noise vectors to create the signals. For each signal we count how many atoms OMP identifies correctly before recovering the first incorrect atom. Figure 2 shows the average over all realisations, divided by the correct sparsity level, for both dictionaries and three noise levels, as well as the relative number of recoverable atoms for the two non-zero noise levels (g,h), meaning the number of coefficients above the noise level. Comparing to the success rates in the noiseless case, we can clearly see the overlay of the two effects: for small coefficient decay we recover as many atoms as in the noiseless case, while for large decay we recover all atoms with coefficients above the noise level.
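The per-signal count used in this experiment can be sketched as follows; the noise level, dictionary and other parameters in the usage example are again illustrative assumptions, not the settings behind Figure 2.

```python
import numpy as np

def correct_before_first_error(Phi, y, true_support, max_steps):
    """Run OMP and count how many atoms are identified correctly before the
    first incorrect atom is selected."""
    support, residual, correct = [], y.copy(), 0
    for _ in range(max_steps):
        k = int(np.argmax(np.abs(Phi.T @ residual)))
        if k not in true_support:
            break
        correct += 1
        support.append(k)
        x, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ x
    return correct

# usage example: one noisy signal y = Phi_I x_I + noise, Gaussian noise of std rho
rng = np.random.default_rng(0)
d, K, S, alpha, rho = 64, 128, 10, 0.8, 0.01
Phi = rng.standard_normal((d, K))
Phi /= np.linalg.norm(Phi, axis=0)
I = rng.choice(K, size=S, replace=False)
c = alpha ** np.arange(S)
c /= np.linalg.norm(c)
y = Phi[:, I] @ (rng.choice([-1.0, 1.0], size=S) * c) + rho * rng.standard_normal(d)
print(correct_before_first_error(Phi, y, set(int(i) for i in I), S) / S)
```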

5 Discussion

We have shown that OMP is successful with high probability if the coefficients exhibit decay, and that in such settings it can even outperform BP. In particular, for geometric sequences with parameter α the admissible sparsity level scales linearly in 1-α. Our next goal is to extend the results to OMP using a perturbed dictionary, which is a necessary step to help tackle dictionary learning algorithms like K-SVD theoretically. We are also interested in deriving average-case results for other algorithms such as Stagewise OMP, [5], which picks more than one atom in each round, or Hard Thresholding Pursuit, [6], an iterative thresholding scheme. Both these algorithms can be computationally more efficient due to using fewer iterations, and a theoretical analysis might allow the design of hybrids that automatically adapt to the decay, retain the computational efficiency and allow for multiple stopping criteria.

This work was supported by the Austrian Science Fund (FWF) under Grant no. Y760. The computational results presented have been achieved (in part) using the HPC infrastructure LEO of the University of Innsbruck. Finally, many, many, many thanks go to M.C. Pali for proof-reading the manuscript.

References

  • Candès et al. [2006] E. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489–509, 2006.
  • Candès and Tao [2006] Emmanuel J. Candès and Terence Tao. Near optimal signal recovery from random projections: universal encoding strategies? IEEE Trans. Inform. Theory, 52(12):5406–5425, 2006.
  • Donoho and Elad [2003] D. Donoho and M. Elad. Optimally sparse representation in general (non-orthogonal) dictionaries via ℓ1 minimization. Proc. Nat. Acad. Sci., 100(5):2197–2202, March 2003.
  • Donoho [2006] D.L. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289–1306, 2006.
  • Donoho et al. [2012] D.L. Donoho, Y. Tsaig, I. Drori, and J.L. Starck. Sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit. IEEE Transactions on Information Theory, 58(2):1094–1121, 2012.
  • Foucart [2011] S. Foucart. Hard thresholding pursuit: An algorithm for compressive sensing. SIAM Journal on Numerical Analysis, 49(6):2543–2563, 2011.
  • Fuchs [1997] J. J. Fuchs. Extension of the Pisarenko method to sparse linear arrays. IEEE Transactions on Signal Processing, 45:2413–2421, October 1997.
  • Gilbert and Tropp [2007] A.C. Gilbert and J.A. Tropp. Signal recovery from random measurements via Orthogonal Matching Pursuit. IEEE Transactions on Information Theory, 53(12):4655–4666, December 2007.
  • Gribonval et al. [2008] R. Gribonval, H. Rauhut, K. Schnass, and P. Vandergheynst. Atoms of all channels, unite! Average case analysis of multi-channel sparse recovery using greedy algorithms. Journal of Fourier Analysis and Applications, 14(5):655–687, 2008.
  • Herzet et al. [2016] C. Herzet, A. Drémeau, and C. Soussen. Relaxed recovery conditions for OMP/OLS by exploiting both coherence and decay. IEEE Transactions on Information Theory, 62(1):459 – 470, 2016.
  • Hsu et al. [2012] D. Hsu, S.M. Kakade, and T. Zhang. A tail inequality for quadratic forms of subgaussian random vectors. Electronic Communications in Probability (arXiv:1110.2842), 17(14), 2012.
  • Needell and Vershynin [2009] D. Needell and R. Vershynin. Uniform Uncertainty Principle and signal recovery via Regularized Orthogonal Matching Pursuit. Foundations of Computational Mathematics, 9(3):317–334, 2009.
  • Pati et al. [1993] Y. Pati, R. Rezaiifar, and P. Krishnaprasad. Orthogonal Matching Pursuit: recursive function approximation with application to wavelet decomposition. In Asilomar Conf. on Signals Systems and Comput., 1993.
  • Schnass and Vandergheynst [2007] K. Schnass and P. Vandergheynst. Average performance analysis for thresholding. IEEE Signal Processing Letters, 14(11):828–831, 2007.
  • Tropp [2004] J.A. Tropp. Greed is good: Algorithmic results for sparse approximation. IEEE Transactions on Information Theory, 50(10):2231–2242, October 2004.
  • Tropp [2008] J.A. Tropp. On the conditioning of random subdictionaries. Applied and Computational Harmonic Analysis, 25(1):1–24, 2008.