Conditioning of restricted Fourier matrices and super-resolution of MUSIC

05/02/2019 ∙ by Weilin Li (NYU) and Wenjing Liao (Georgia Institute of Technology)

This paper studies stable recovery of a collection of point sources from its noisy M+1 low-frequency Fourier coefficients. We focus on the super-resolution regime where the minimum separation of the point sources is below 1/M. We propose a separated clumps model where point sources are clustered in far apart sets, and prove an accurate lower bound of the Fourier matrix with nodes restricted to the source locations. This estimate gives rise to a theoretical analysis on the super-resolution limit of the MUSIC algorithm.




I Introduction

In imaging and signal processing, point sources are usually represented by a discrete measure $\mu = \sum_{j=1}^{S} x_j \delta_{t_j}$, where $x = (x_1, \dots, x_S) \in \mathbb{C}^S$ represents the source amplitudes and $\Omega = \{t_1, \dots, t_S\} \subset \mathbb{T}$ represents the source locations. A uniform array of $M+1$ sensors collects the noisy Fourier coefficients of $\mu$, denoted by $y \in \mathbb{C}^{M+1}$. One can write

$$y = \Phi x + \eta,$$

where $\Phi \in \mathbb{C}^{(M+1) \times S}$ is the Fourier or Vandermonde matrix (with nodes on the unit circle):

$$\Phi_{m,j} = e^{-2\pi i m t_j}, \qquad m = 0, \dots, M, \quad j = 1, \dots, S,$$

and $\eta \in \mathbb{C}^{M+1}$ represents noise.
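As a sanity check of this measurement model, here is a minimal NumPy sketch that forms the Fourier/Vandermonde matrix and synthesizes noisy measurements; the locations, amplitudes, and noise level are illustrative assumptions, not values from the paper:

```python
import numpy as np

def vandermonde(nodes, M):
    """Fourier/Vandermonde matrix with entries exp(-2*pi*i*m*t_j), m = 0..M."""
    m = np.arange(M + 1)[:, None]          # rows: frequencies 0..M
    return np.exp(-2j * np.pi * m * np.asarray(nodes)[None, :])

M = 50
nodes = np.array([0.10, 0.12, 0.55])       # hypothetical source locations on [0, 1)
x = np.array([1.0, -0.5, 2.0])             # hypothetical source amplitudes
Phi = vandermonde(nodes, M)

rng = np.random.default_rng(0)
eta = 0.01 * (rng.standard_normal(M + 1) + 1j * rng.standard_normal(M + 1))
y = Phi @ x + eta                          # noisy low-frequency Fourier coefficients
```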

Our goal is to accurately recover $\mu$, especially the support $\Omega$, from $y$. The measurements contain information about $\mu$ at a coarse resolution of approximately $1/M$, whereas we would like to estimate $\Omega$ with a higher resolution. In the noiseless setting where $\eta = 0$, the measure $\mu$ can be exactly recovered by many methods. With noise, the stability of this inverse problem depends on $\Omega$. A crucial quantity is the minimum separation between the two closest points in $\Omega$, defined as

$$\Delta = \min_{j \neq k} |t_j - t_k|_{\mathbb{T}},$$

where $|\cdot|_{\mathbb{T}}$ is the metric on the torus $\mathbb{T} = \mathbb{R}/\mathbb{Z}$. In imaging, $1/M$ is regarded as the standard resolution. As a manifestation of the Heisenberg uncertainty principle, recovery is sensitive to noise whenever $\Delta < 1/M$, a case referred to as super-resolution. The super-resolution factor (SRF) is $(M\Delta)^{-1}$, standing for the maximum number of points in $\Omega$ that can be contained in an interval of length $1/M$.
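The minimum separation and the SRF are straightforward to compute; a small NumPy sketch (the node set is an illustrative assumption):

```python
import numpy as np

def min_separation(nodes):
    """Minimum pairwise distance on the torus T = [0, 1), including wrap-around."""
    t = np.sort(np.mod(nodes, 1.0))
    gaps = np.diff(np.append(t, t[0] + 1.0))   # consecutive gaps; last one wraps
    return gaps.min()

M = 50
nodes = [0.10, 0.12, 0.55]        # illustrative support
delta = min_separation(nodes)
srf = 1.0 / (M * delta)           # SRF > 1 means below the standard resolution 1/M
```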

Prior mathematical work on super-resolution can be placed in three main categories: (a) the min-max error of super-resolution was studied in [1, 2] when the point sources lie on a fine grid of the torus; (b) when $\Omega$ is well separated, i.e., $\Delta \geq c/M$ for some constant $c > 0$, representative methods include total variation minimization (TV-min) [3, 4, 5], greedy algorithms [6], and subspace methods [7, 8]. These results address the issue of discretization error [6] arising in sparse recovery, but they do not always succeed when $\Delta < 1/M$; (c) when $\Delta < 1/M$, certain assumptions on the signs of $x$ are required by many optimization-based methods [9, 10, 11]. Alternatively, subspace methods exploit a low-rank factorization of the data and can recover complex measures, but there are many unanswered questions related to their stability that we would like to address.

This paper focuses on a celebrated subspace method called MUltiple SIgnal Classification (MUSIC) [12]. An important open problem is to understand the super-resolution limit of MUSIC: characterize the support sets $\Omega$ and noise levels for which MUSIC can stably recover, within a prescribed accuracy, all measures supported in $\Omega$. Prior numerical experiments in [7] showed that MUSIC can succeed even when $\Delta < 1/M$, but a rigorous justification was not provided. This is one of our main motivations for the theory presented in this paper and in our more detailed preprint [13].

As a result of Wedin’s theorem [17, 18], the stability of MUSIC obeys, in an informal manner,

$$\text{(perturbation of MUSIC)} \ \lesssim\ \frac{\varepsilon}{\sigma_{\min}(\Phi)},$$

where $\sigma_{\min}(\Phi)$ is the smallest non-zero singular value of $\Phi$ and $\varepsilon$ is a quantity depending on the noise. Therefore, MUSIC can accurately estimate $\Omega$ provided that the noise term $\varepsilon$ is sufficiently small compared to $\sigma_{\min}(\Phi)$; the noise amplification factor $1/\sigma_{\min}(\Phi)$ depends crucially on $\Omega$.

In the separated case $\Delta \geq c/M$, accurate estimates for the largest and smallest singular values of $\Phi$ are known [14, 15, 8, 7]. In the super-resolution regime $\Delta < 1/M$, the value of $\sigma_{\min}(\Phi)$ is extremely sensitive to the “geometry” or configuration of $\Omega$, and a more sophisticated description of the “geometry” of $\Omega$ than the minimum separation is required. Based on this observation, we define a separated clumps model where $\Omega$ consists of well-separated subsets, each containing several closely spaced points. This situation occurs naturally in applications where point sources are clustered in far apart sets.

Under this separated clumps model, we provide a lower bound of $\sigma_{\min}(\Phi)$ with the dominant term scaling like $\sqrt{M}\,\mathrm{SRF}^{-(\lambda-1)}$, where $\lambda$ is the cardinality of the largest clump. This is a significant improvement on existing lower bounds with continuous measurements, where the exponent depends on the total sparsity $S$ [1, 2]. We use this estimate to rigorously establish the resolution limit of MUSIC and explain numerical results. More comprehensive explanations, comparisons, simulations, and proofs can be found in [13].

II Minimum singular value of Vandermonde matrices

We first define a geometric model of $\Omega$ in which the point sources are clustered into far apart clumps.

Assumption 1 (Separated clumps model).

Let $A$ and $M$ be positive integers and let $\Omega \subset \mathbb{T}$ have cardinality $S$. We say that $\Omega$ consists of $A$ separated clumps with parameters $(A, M, S, \alpha, \beta)$ if the following hold.

  1. $\Omega$ can be written as the union of $A$ disjoint sets $\{\Lambda_a\}_{a=1}^{A}$, where each clump $\Lambda_a$ is contained in an interval of length $1/M$.

  2. $\Delta \geq \alpha/M$, with $\sum_{a=1}^{A} \lambda_a = S$, where $\lambda_a$ is the cardinality of $\Lambda_a$.

  3. If $A > 1$, then the distance between any two clumps is at least $\beta/M$.

There are many types of discrete sets that consist of separated clumps. Extreme examples include the case where $\Omega$ is a single clump containing all $S$ points, and the case where $\Omega$ consists of $S$ clumps each containing a single point. While our theory applies to both extremes, the in-between case where $\Omega$ consists of several clumps of modest size is the most interesting, and developing a theory of super-resolution for this case has turned out to be quite challenging.

Under this separated clumps model, we expect $\sigma_{\min}(\Phi)$ to be an aggregate of $A$ terms, where each term only depends on the “geometry” of one clump.

Theorem 1.

Let $S \geq 2$. Assume $\Omega$ satisfies Assumption 1 with parameters $(A, M, S, \alpha, \beta)$ for some $\alpha > 0$ and $\beta$ satisfying a lower bound (II.1), whose explicit form is given in [13]. Then there exist explicit constants $C_a > 0$, each depending only on $\lambda_a$, such that

$$\sigma_{\min}(\Phi) \ \geq\ \sqrt{M}\,\Big(\sum_{a=1}^{A} C_a\,\mathrm{SRF}^{\,2(\lambda_a - 1)}\Big)^{-1/2}. \qquad (\text{II.2})$$
The main feature of this theorem is the exponent on the SRF, which depends on the cardinality of each clump as opposed to the total number of points. Let $\lambda$ be the cardinality of the largest clump: $\lambda = \max_a \lambda_a$. Theorem 1 implies

$$\sigma_{\min}(\Phi) \ \geq\ C\,\sqrt{M}\ \mathrm{SRF}^{-(\lambda - 1)}. \qquad (\text{II.3})$$

Previous results [1, 2] strongly suggest (we avoid using “imply” because they studied a similar inverse problem but with continuous, rather than discrete, measurements like the ones considered here) that

$$\sigma_{\min}(\Phi) \ \gtrsim\ \sqrt{M}\ \mathrm{SRF}^{-(S - 1)}. \qquad (\text{II.4})$$
By comparing the inequalities (II.3) and (II.4), we see that our lower bound is dramatically better whenever the point sources are not all located within a single clump. These results are also consistent with our intuition that $\sigma_{\min}(\Phi)$ is smallest when $\Omega$ consists of $S$ closely spaced points; more details about this can be found in [13]. In [16], a lower bound of $\sigma_{\min}(\Phi)$ is derived for a model called clustered nodes; a detailed comparison between Theorem 1 and the results in [16] can be found in [13].
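This contrast is easy to observe numerically. The sketch below (NumPy; all parameter values are illustrative assumptions) compares the smallest singular value of the Fourier matrix for one clump of four points against two far-apart clumps of two points, at the same minimum separation:

```python
import numpy as np

def vandermonde(nodes, M):
    m = np.arange(M + 1)[:, None]
    return np.exp(-2j * np.pi * m * np.asarray(nodes)[None, :])

M = 100
delta = 0.2 / M                # minimum separation; SRF = 1/(M*delta) = 5

# One clump of four consecutive points (largest clump has 4 points).
single = 0.5 + delta * np.arange(4)
# Two far-apart clumps of two points each (largest clump has 2 points).
clumps = np.array([0.1, 0.1 + delta, 0.7, 0.7 + delta])

s_single = np.linalg.svd(vandermonde(single, M), compute_uv=False).min()
s_clumps = np.linalg.svd(vandermonde(clumps, M), compute_uv=False).min()
# At equal separation, the separated clumps give a much larger smallest
# singular value, reflecting the exponent lambda-1 = 1 versus S-1 = 3.
```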

The following theorem provides an upper bound on $\sigma_{\min}(\Phi)$ when $\Omega$ contains $\lambda$ consecutive points spaced by $\Delta$, which shows that the dependence on the SRF in inequality (II.3) is optimal.

Theorem 2.

Let $\lambda \geq 2$. There exists a constant $C_\lambda > 0$ depending only on $\lambda$ such that the following holds: for any $M$, $\Delta > 0$, and $\Omega \subset \mathbb{T}$ of cardinality $S$ that contains the set $\{j\Delta\}_{j=0}^{\lambda-1}$, we have $\sigma_{\min}(\Phi) \leq C_\lambda \sqrt{M}\ \mathrm{SRF}^{-(\lambda-1)}$.

III MUSIC and its super-resolution limit

In signal processing, the MUSIC algorithm [12] has been widely used due to its superior numerical performance among subspace methods. MUSIC relies upon the Vandermonde decomposition of a Hankel matrix, and its stability to noise can be formulated as a matrix perturbation problem.

Throughout the following exposition, we assume that $L$ is an integer satisfying the inequalities $S \leq L \leq M + 1 - S$. The Hankel matrix of $y$ is

$$\mathcal{H}(y) = \begin{bmatrix} y_0 & y_1 & \cdots & y_{M-L} \\ y_1 & y_2 & \cdots & y_{M-L+1} \\ \vdots & \vdots & \ddots & \vdots \\ y_L & y_{L+1} & \cdots & y_M \end{bmatrix}.$$

If we denote the noiseless measurement vector by $y^0 = \Phi x$, then it is straightforward to verify that we have the following Vandermonde decomposition

$$\mathcal{H}(y^0) = \Phi_L\,\mathrm{diag}(x)\,\Phi_{M-L}^T,$$

where $\Phi_L$ and $\Phi_{M-L}$ are the Vandermonde matrices with $L+1$ and $M-L+1$ rows, respectively. Observe that both $\Phi_L$ and $\Phi_{M-L}$ have full column rank when $S \leq L \leq M+1-S$, and that $\mathcal{H}(y^0)$ has rank $S$. The Singular Value Decomposition (SVD) of $\mathcal{H}(y^0)$ is of the form

$$\mathcal{H}(y^0) = \begin{bmatrix} U_1 & U_2 \end{bmatrix} \begin{bmatrix} \Sigma & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} V_1 & V_2 \end{bmatrix}^*,$$

where $\Sigma = \mathrm{diag}(\sigma_1, \dots, \sigma_S)$ contains the non-zero singular values of $\mathcal{H}(y^0)$. The columns of $U_1$ and $U_2$ span $\mathrm{Range}(\mathcal{H}(y^0))$ and its orthogonal complement respectively, which are called the signal space and the noise space.
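The Vandermonde decomposition and the rank of the noiseless Hankel matrix can be verified numerically; a minimal sketch (NumPy/SciPy; the nodes and amplitudes are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import hankel  # builds a Hankel matrix from first column / last row

M, L = 40, 20
nodes = np.array([0.2, 0.23, 0.6])      # illustrative support
x = np.array([1.0, 2.0, -1.0])          # illustrative amplitudes
S = len(nodes)

def vandermonde(nodes, n_rows):
    m = np.arange(n_rows)[:, None]
    return np.exp(-2j * np.pi * m * nodes[None, :])

y0 = vandermonde(nodes, M + 1) @ x      # noiseless measurements
H = hankel(y0[: L + 1], y0[L:])         # (L+1) x (M-L+1) Hankel matrix

# Check the Vandermonde decomposition H = Phi_L diag(x) Phi_{M-L}^T.
Phi_L = vandermonde(nodes, L + 1)
Phi_R = vandermonde(nodes, M - L + 1)
assert np.allclose(H, Phi_L @ np.diag(x) @ Phi_R.T)

# The Hankel matrix has numerical rank S: exactly S singular values are non-zero.
sing = np.linalg.svd(H, compute_uv=False)
```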

0:  Input: $y \in \mathbb{C}^{M+1}$, sparsity $S$, $L$.
1:  Form the Hankel matrix $\mathcal{H}(y)$.
2:  Compute the SVD of $\mathcal{H}(y)$: $\mathcal{H}(y) = [\widetilde{U}_1\ \widetilde{U}_2]\,\widetilde{\Sigma}\,[\widetilde{V}_1\ \widetilde{V}_2]^*$,
where $\widetilde{U}_1 \in \mathbb{C}^{(L+1)\times S}$ and $\widetilde{U}_2 \in \mathbb{C}^{(L+1)\times(L+1-S)}$.
3:  Compute the imaging function $\widetilde{\mathcal{J}}(t)$.
4:  Output: $\widehat{\Omega} = \{\widehat{t}_j\}_{j=1}^{S}$ corresponding to the $S$ largest local maxima of $\widetilde{\mathcal{J}}(t)$.
Algorithm 1 MUltiple SIgnal Classification (MUSIC)

For any $t \in \mathbb{T}$ and positive integer $L$, we define the steering vector of length $L+1$ to be

$$\phi_L(t) = \big(1,\ e^{-2\pi i t},\ e^{-2\pi i 2t},\ \dots,\ e^{-2\pi i L t}\big)^T.$$

MUSIC is based on the following observation:

$$t \in \Omega \iff \phi_L(t) \in \mathrm{Range}(U_1) \iff U_2^*\,\phi_L(t) = 0.$$

Table I: Functions in the MUSIC algorithm
            Noise-space correlation function                                              Imaging function
Noiseless   $\mathcal{R}(t) = \|U_2^*\,\phi_L(t)\|_2 / \|\phi_L(t)\|_2$                    $\mathcal{J}(t) = 1/\mathcal{R}(t)$
Noisy       $\widetilde{\mathcal{R}}(t) = \|\widetilde{U}_2^*\,\phi_L(t)\|_2 / \|\phi_L(t)\|_2$   $\widetilde{\mathcal{J}}(t) = 1/\widetilde{\mathcal{R}}(t)$

This observation can be reformulated in terms of the noise-space correlation function and the imaging function (see Table I for their definitions), as summarized in the following lemma.

Lemma 1.

Let $S \leq L \leq M+1-S$. Then $\mathcal{R}(t) = 0$ if and only if $t \in \Omega$.

To summarize this discussion: in the noiseless case where we have access to $y^0$, the source locations can be exactly identified through the zeros of the noise-space correlation function $\mathcal{R}$ or the peaks of the imaging function $\mathcal{J}$.
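The noiseless identification just described fits in a few lines. The sketch below (NumPy/SciPy; the grid resolution and test measure are illustrative assumptions) builds the noise space from the Hankel SVD and locates the support through the deepest local minima of the noise-space correlation, equivalently the largest peaks of the imaging function:

```python
import numpy as np
from scipy.linalg import hankel

def music(y, S, L, grid_size=4096):
    """Estimate S source locations from Fourier data y via MUSIC."""
    H = hankel(y[: L + 1], y[L:])                   # (L+1) x (len(y)-L) Hankel matrix
    U, _, _ = np.linalg.svd(H)
    U2 = U[:, S:]                                   # noise space
    t = np.arange(grid_size) / grid_size            # candidate locations on [0, 1)
    steer = np.exp(-2j * np.pi * np.arange(L + 1)[:, None] * t[None, :])
    # Noise-space correlation R(t); its zeros mark the support.
    R = np.linalg.norm(U2.conj().T @ steer, axis=0) / np.linalg.norm(steer, axis=0)
    mins = np.where((R < np.roll(R, 1)) & (R < np.roll(R, -1)))[0]  # local minima
    picks = mins[np.argsort(R[mins])[:S]]           # the S deepest dips
    return np.sort(t[picks])

M, L = 64, 32
nodes = np.array([0.20, 0.23, 0.70])                # illustrative support
amps = np.array([1.0, -2.0, 1.5])
y0 = np.exp(-2j * np.pi * np.arange(M + 1)[:, None] * nodes[None, :]) @ amps
estimate = music(y0, S=3, L=L)                      # close to nodes, up to grid error
```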

In the presence of noise, we only have access to $y$, so $\mathcal{H}(y)$ is a perturbation of $\mathcal{H}(y^0)$:

$$\mathcal{H}(y) = \mathcal{H}(y^0) + \mathcal{H}(\eta).$$

The noise-space correlation and imaging functions are perturbed to $\widetilde{\mathcal{R}}$ and $\widetilde{\mathcal{J}}$ respectively. The stability of MUSIC depends on the perturbation of the noise-space correlation function from $\mathcal{R}$ to $\widetilde{\mathcal{R}}$, which we measure by

$$\sup_{t \in \mathbb{T}} \big|\widetilde{\mathcal{R}}(t) - \mathcal{R}(t)\big|.$$

By using Wedin’s theorem [17] (see also [18, Theorem 3.4]), we can prove the following perturbation bound.

Proposition 1.

Let $S \leq L \leq M+1-S$. Suppose $\|\mathcal{H}(\eta)\|_2 < \sigma_S(\mathcal{H}(y^0))$. Then

$$\sup_{t \in \mathbb{T}} \big|\widetilde{\mathcal{R}}(t) - \mathcal{R}(t)\big| \ \leq\ \frac{2\,\|\mathcal{H}(\eta)\|_2}{\sigma_S(\mathcal{H}(y^0)) - \|\mathcal{H}(\eta)\|_2}.$$

If $\eta$ is independent Gaussian noise, i.e., $\eta \sim \mathcal{N}(0, \sigma^2 I_{M+1})$, the spectral norm of $\mathcal{H}(\eta)$ satisfies the following concentration inequality [19, Theorem 4]:

Lemma 2.

If $\eta \sim \mathcal{N}(0, \sigma^2 I_{M+1})$, then

$$\mathbb{P}\Big(\|\mathcal{H}(\eta)\|_2 \ \geq\ c\,\tau\,\sigma\sqrt{(M+1)\log(M+1)}\Big) \ \leq\ (M+1)^{1-\tau}$$

for every $\tau \geq 1$, where $c > 0$ is an absolute constant.
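This concentration can be probed by simulation. In the sketch below (NumPy/SciPy), the benchmark rate $\sigma\sqrt{(M+1)\log(M+1)}$ is taken as an assumed reference scaling for the spectral norm of a Gaussian Hankel matrix; sizes, seed, and trial count are illustrative:

```python
import numpy as np
from scipy.linalg import hankel

rng = np.random.default_rng(1)
M, L, sigma = 128, 64, 0.1
bench = sigma * np.sqrt((M + 1) * np.log(M + 1))    # assumed benchmark rate

norms = []
for _ in range(50):
    eta = sigma * rng.standard_normal(M + 1)        # Gaussian noise vector
    norms.append(np.linalg.norm(hankel(eta[: L + 1], eta[L:]), 2))
ratio = max(norms) / bench                           # stays O(1) across trials
```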

Combining Proposition 1, Lemma 2 and Theorem 1 gives rise to a stability analysis of MUSIC:

Theorem 3.

Let $M$ be an even integer satisfying $M \geq 2S$ and set $L = M/2$. Fix parameters $\varepsilon \in (0,1)$ and $\tau \geq 1$, and let $x_{\min} = \min_j |x_j|$. Assume $\Omega$ satisfies Assumption 1 with parameters $(A, M/2, S, \alpha, \beta)$ for some $\alpha$ and $\beta$ satisfying (II.1). There exists an explicit constant $C > 0$ such that if

$$\frac{\sigma}{x_{\min}} \ \leq\ C\,\varepsilon\,\frac{\sigma_{\min}(\Phi_{M/2})^2}{\tau\sqrt{(M+1)\log(M+1)}},$$

then with probability no less than $1 - (M+1)^{1-\tau}$,

$$\sup_{t \in \mathbb{T}} \big|\widetilde{\mathcal{R}}(t) - \mathcal{R}(t)\big| \ \leq\ \varepsilon.$$
In order to guarantee an $\varepsilon$-perturbation of the noise-space correlation function, the noise-to-signal ratio $\sigma/x_{\min}$ should therefore follow the scaling law

$$\frac{\sigma}{x_{\min}} \ \lesssim\ \varepsilon\,\frac{\sigma_{\min}(\Phi_{M/2})^2}{\sqrt{M\log M}}.$$

Let $\lambda$ be the cardinality of the largest clump. By (II.3), this scaling law reduces to

$$\frac{\sigma}{x_{\min}} \ \lesssim\ \varepsilon\,\sqrt{\frac{M}{\log M}}\ \mathrm{SRF}^{-2(\lambda-1)}.$$
The resolution limit of MUSIC is exponential in the SRF, but the exponent only depends on the cardinality $\lambda$ of the largest clump instead of the total sparsity $S$. These estimates are verified by numerical experiments in [13].


Wenjing Liao is supported by NSF-DMS-1818751.


  • [1] D. L. Donoho, “Superresolution via sparsity constraints,” SIAM Journal on Mathematical Analysis, vol. 23, no. 5, pp. 1309–1331, 1992.
  • [2] L. Demanet and N. Nguyen, “The recoverability limit for superresolution via sparsity,” arXiv preprint arXiv:1502.01385, 2015.
  • [3] E. J. Candès and C. Fernandez-Granda, “Super-resolution from noisy data,” Journal of Fourier Analysis and Applications, vol. 19, no. 6, pp. 1229–1254, 2013.
  • [4] V. Duval and G. Peyré, “Exact support recovery for sparse spikes deconvolution,” Foundations of Computational Mathematics, vol. 15, no. 5, pp. 1315–1355, 2015.
  • [5] W. Li, “Elementary error estimates for super-resolution de-noising,” arXiv preprint arXiv:1702.03021, 2017.
  • [6] A. C. Fannjiang and W. Liao, “Coherence-pattern guided compressive sensing with unresolved grids,” SIAM Journal on Imaging Sciences, vol. 5, no. 1, pp. 179–202, 2012.
  • [7] W. Liao and A. Fannjiang, “MUSIC for single-snapshot spectral estimation: Stability and super-resolution,” Applied and Computational Harmonic Analysis, vol. 40, no. 1, pp. 33–67, 2016.
  • [8] A. Moitra, “Super-resolution, extremal functions and the condition number of Vandermonde matrices,” Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, 2015.
  • [9] V. I. Morgenshtern and E. J. Candes, “Super-resolution of positive sources: The discrete setup,” SIAM Journal on Imaging Sciences, vol. 9, no. 1, pp. 412–444, 2016.
  • [10] Q. Denoyelle, V. Duval, and G. Peyré, “Support recovery for sparse super-resolution of positive measures,” Journal of Fourier Analysis and Applications, vol. 23, no. 5, pp. 1153–1194, 2017.
  • [11] J. J. Benedetto and W. Li, “Super-resolution by means of Beurling minimal extrapolation,” Applied and Computational Harmonic Analysis, 2018.
  • [12] R. Schmidt, “Multiple emitter location and signal parameter estimation,” IEEE Transactions on Antennas and Propagation, vol. 34, no. 3, pp. 276–280, 1986.
  • [13] W. Li and W. Liao, “Stable super-resolution limit and smallest singular value of restricted Fourier matrices,” arXiv preprint arXiv:1709.03146, 2017.
  • [14] H. L. Montgomery and R. C. Vaughan, “Hilbert’s inequality,” Journal of the London Mathematical Society, vol. 2, no. 1, pp. 73–82, 1974.
  • [15] J. D. Vaaler, “Some extremal functions in Fourier analysis,” Bulletin of the American Mathematical Society, vol. 12, no. 2, pp. 183–216, 1985.
  • [16] D. Batenkov, L. Demanet, G. Goldman, and Y. Yomdin, “Stability of partial Fourier matrices with clustered nodes,” arXiv preprint arXiv:1809.00658, 2018.
  • [17] P.-Å. Wedin, “Perturbation bounds in connection with singular value decomposition,” BIT Numerical Mathematics, vol. 12, no. 1, pp. 99–111, 1972.
  • [18] R.-C. Li, “Relative perturbation theory: II. Eigenspace and singular subspace variations,” SIAM Journal on Matrix Analysis and Applications, vol. 20, no. 2, pp. 471–492, 1998.
  • [19] W. Liao, “MUSIC for multidimensional spectral estimation: Stability and super-resolution,” IEEE Transactions on Signal Processing, vol. 63, no. 23, pp. 6395–6406, 2015.