Super-resolution limit of the ESPRIT algorithm

05/03/2019 ∙ by Weilin Li, et al. ∙ University of California, Davis ∙ New York University ∙ Georgia Institute of Technology

The problem of imaging point objects can be formulated as the estimation of an unknown atomic measure μ from its M+1 consecutive noisy Fourier coefficients. The standard resolution of this inverse problem is 1/M, and super-resolution refers to the capability of resolving atoms at a higher resolution. When any two atoms are less than 1/M apart, this recovery problem is highly challenging and many existing algorithms either cannot deal with this situation or require restrictive assumptions on the sign of the measure. ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) is an efficient method that does not depend on the sign of the measure, and this paper provides a stability analysis: we prove an explicit upper bound on the recovery error of ESPRIT in terms of the minimum singular value of Vandermonde matrices. Combined with prior estimates on the minimum singular value, this explains the resolution limit of ESPRIT -- when the support of μ consists of multiple well-separated clumps, the noise level that ESPRIT can tolerate scales like SRF^{−(2λ−2)}, where the super-resolution factor SRF governs the difficulty of the problem and λ is the cardinality of the largest clump. Our theory is validated by numerical experiments.




1 Introduction

1.1 Background and motivation

Many imaging problems involve detection of point objects from Fourier measurements. Such inverse problems arise in many interesting applications in imaging and signal processing, including Direction-Of-Arrival (DOA) estimation [20, 36], inverse source and inverse scattering [16, 14, 13], and time series analysis [37]. The problem can be formulated as spectral estimation - estimating an unknown discrete measure consisting of a collection of Dirac delta functions, from its noisy low-frequency Fourier coefficients.

The first solution to spectral estimation can be traced back to Prony [33]. Unfortunately, Prony's method is numerically unstable, and numerous modifications have been attempted to improve its numerical behavior. In the signal processing community, a class of subspace methods achieved major breakthroughs for DOA estimation. Important representative subspace methods are MUSIC (MUltiple SIgnal Classification) [36], ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) [34], and the matrix pencil method [19]. MUSIC was one of the first robust methods that gives a high-resolution recovery, but its computational cost is high. This drawback motivated the development of more efficient algorithms such as ESPRIT and the matrix pencil method. These methods have been widely used in applications due to their high-resolution recovery – they are capable of resolving fine details of the signal [20].

A central question in the mathematical theory of super-resolution is how to stably estimate μ when there are closely spaced atoms in its support. Let Δ be the minimum separation of μ, which is defined as the distance between the two closest atoms in the support of μ. Suppose M+1 consecutive noisy Fourier coefficients of μ are collected. The standard resolution of this inverse problem is 1/M, which is the threshold predicted by the Heisenberg uncertainty principle. Super-resolution estimation refers to the case where Δ is significantly smaller than 1/M. In this situation the recovery is very sensitive to noise.

In recent years, the theory of super-resolution has garnered considerable attention, partly due to the invention of a new family of convex minimization methods for this problem, see [8, 7, 41, 2, 12, 24]. While they are successful when Δ ≥ c/M for a reasonably small constant c, they can potentially fail when Δ < 1/M, even in the noiseless regime. For these algorithms to succeed when Δ < 1/M, one requires that μ is non-negative [32, 35] or, more generally, that the sign of its atoms satisfies certain algebraic criteria [6]. Hence, it appears that an entirely different approach is required to deal with the case of closely spaced atoms with arbitrary complex phases that is pertinent to many applications.

Subspace methods like MUSIC and ESPRIT are considerably different from the aforementioned convex approaches. First, they do not involve convex optimization. Second, they provide exact recovery when there is no noise, regardless of the location of the atoms, as long as the number of measurements is at least twice the number of atoms. Third, numerical evidence has demonstrated that they can accurately estimate μ with atoms of arbitrary complex phases, even when Δ < 1/M, provided that the noise level is sufficiently small. In other words, MUSIC and ESPRIT have super-resolution capabilities, regardless of the sign of μ.

An interesting question is to quantify the resolution limit of MUSIC and ESPRIT – conditions on Δ and the noise level under which they can recover μ up to a prescribed error. The answer is not straightforward, as simple numerical experiments show that the stability of MUSIC and ESPRIT heavily depends on how the support of μ is arranged. In our earlier works [25, 26], we introduced a separated clumps model to allow for atoms clustered in far apart sets, and proved accurate estimates on the minimum singular values of Vandermonde matrices with nodes satisfying this separated clumps model. The super-resolution limit of MUSIC was studied in [26].

This paper focuses on the robustness of ESPRIT. Although this method was invented over a quarter century ago, an accurate analysis of its super-resolution limit has been elusive. One main complication is that one must estimate the minimum singular values of two structured matrices that appear in ESPRIT. One is a rectangular Vandermonde matrix with nodes on the unit circle, and the other arises specifically from ESPRIT. The minimum singular value of a rectangular Vandermonde matrix with closely spaced nodes on the unit circle was not addressed until the recent works [3, 25, 21]. To study the second matrix, one needs to exploit its structure, but little research has been done in this direction. In the process of studying these matrices, we discovered that ESPRIT implicitly leverages an uncertainty principle for non-harmonic Fourier series. This connection has not been previously discovered, and it is the key piece that allows us to provide an accurate analysis of the super-resolution limit of ESPRIT.

1.2 Contributions and outline

ESPRIT has been empirically observed to be robust to noise and is capable of super-resolving atoms with arbitrary spacing and phases, provided that the noise is sufficiently small. This paper rigorously derives the error bound of ESPRIT, and proves the resolution limit of ESPRIT under a geometric model for the unknown support. We review the method and introduce the necessary notation in Section 2.

Let T be the support of the unknown measure μ and T̂ be the output of ESPRIT. Theorem 1 upper bounds md(T, T̂), the matching distance between T and T̂, in terms of the noise level and the minimum singular value of the Vandermonde matrices whose nodes are determined by T. Our bound significantly improves upon existing ones, especially when Δ < 1/M, and numerical evidence demonstrates that it provides an accurate dependence on the minimum singular value. Theorem 1 is deterministic, non-asymptotic, and holds regardless of the support of μ. All constants are explicitly given. This part of the analysis can be found in Section 3.

We next combine Theorem 1 with bounds on the minimum singular value of Vandermonde matrices under a separated clumps model [25, 26] to obtain Theorem 3. This is the first known rigorous guarantee for ESPRIT in the Δ < 1/M regime. To provide the reader with an example of such an estimate, we define the super-resolution factor SRF := 1/(MΔ), which can be interpreted as the maximum number of atoms located within an interval of length 1/M. For a fixed accuracy ε, if the noise level is smaller than a constant times SRF^{−(2λ−2)} ε, where λ is the largest cardinality of a single clump of T, then md(T, T̂) ≤ ε. Our theory is validated by numerical experiments. Results of this nature can be found in Section 4.

A crucial step in our analysis is the derivation of a lower bound, uniform in T, on the minimum singular value of certain matrices that appear in ESPRIT. We show that this problem is equivalent to proving a certain uncertainty principle for discrete non-harmonic Fourier series. We establish the latter result in Theorem 5, which might be of independent mathematical interest. The uncertainty principle results are explained and proved in Section 5, which can be read independently of the rest of the paper. It is remarkable that ESPRIT implicitly leverages an uncertainty principle.

Finally, we prove the results stated in Sections 3 and 4 in Appendix A.

1.3 Related work

MUSIC and ESPRIT were originally invented for DOA estimation, where the amplitudes of μ are assumed to be random and multiple snapshots of measurements are taken. In this “multiple snapshot” setting, more information about μ is collected, and statistics of the amplitudes of μ can be utilized. The sensitivity of MUSIC and ESPRIT for DOA estimation was studied in [38, 39, 40, 22]. This paper focuses on the “single snapshot” setting, where the amplitudes of μ are deterministic and little statistical information can be utilized.

Regarding the stability analysis of subspace methods, there have been works on bounding the error in terms of the minimum singular value of Vandermonde matrices. Such inequalities can be found in [28, 25] for MUSIC and in [15, 1] for ESPRIT, as well as in [30] for the matrix pencil method. One major roadblock for this approach is that accurate bounds for the smallest singular value in the Δ < 1/M regime were not readily available. This difficulty was addressed in [25], which provided the first accurate analysis of MUSIC in the Δ < 1/M regime. As for ESPRIT, the bounds in [15, 1] do not capture the exact dependence of the error on the minimum singular value and, consequently, are inaccurate when Δ < 1/M (see (3.6) and (3.7) and the discussion there).

The minimum singular value of a Vandermonde matrix highly depends on the configuration of its nodes. The best available bound for the well-separated case Δ > 1/M was provided in [30], which relied on the Beurling–Selberg machinery, see [43]. Recently there have been several independent works which provide estimates for the minimum singular value by incorporating additional geometric information about the support set, see [3, 25, 21]. Accurate lower bounds under a clumps model can be found in [25].

Prior works [10, 9] addressed super-resolution from an information-theoretic view. They considered the situation where the atoms are located on a grid with spacing Δ and the given information consists of noisy continuous Fourier measurements. Both papers derived lower and upper bounds for a min-max error. These results showed that a minimization-based estimator is optimal for this discrete model, but it is not computationally feasible. ESPRIT is a polynomial-time algorithm that can be used for the more general “off-the-grid” model. The investigation of the optimality of ESPRIT is an interesting future research direction.

2 Review of ESPRIT

We first describe the spectral estimation problem. Let 𝕋 denote the periodic unit interval and let δ_t denote the Dirac measure supported at t ∈ 𝕋. We consider the collection of non-zero, complex-valued discrete measures on 𝕋 with at most S atoms. Any such μ is of the form

μ = Σ_{j=1}^{S} a_j δ_{t_j},  a_j ∈ ℂ ∖ {0},  t_j ∈ 𝕋.
The minimum separation of μ is defined as

Δ := min_{j ≠ k} |t_j − t_k|_𝕋,

where |·|_𝕋 denotes the wrap-around distance on 𝕋.
Let f̂ = (μ̂(0), μ̂(1), …, μ̂(M)) denote the first M + 1 consecutive Fourier coefficients of μ:

μ̂(m) = ∫_𝕋 e^{−2πimt} dμ(t) = Σ_{j=1}^{S} a_j e^{−2πim t_j}.
Suppose we are given information about μ in the form of M + 1 consecutive noisy Fourier coefficients,

y = f̂ + η,

where η ∈ ℂ^{M+1} represents some unknown noise vector. The (M+1) × S Fourier or Vandermonde matrix whose nodes are specified by T = {t_1, …, t_S} is denoted V_M(T), with entries

(V_M(T))_{m,j} = e^{−2πim t_j},  0 ≤ m ≤ M,  1 ≤ j ≤ S.
If μ has amplitudes a = (a_1, …, a_S) and support T, then we have the relationship

f̂ = V_M(T) a,  so that  y = V_M(T) a + η.
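As a concrete illustration of the measurement model above, the following sketch (in Python with NumPy; the variable names `t`, `a`, `M` are illustrative and not taken from the paper's code) builds the Vandermonde matrix and simulates noisy Fourier coefficients:

```python
import numpy as np

def vandermonde(t, M):
    """(M+1) x S Fourier/Vandermonde matrix with entries exp(-2*pi*i*m*t_j)."""
    m = np.arange(M + 1).reshape(-1, 1)               # frequencies m = 0, ..., M
    return np.exp(-2j * np.pi * m * np.asarray(t).reshape(1, -1))

t = np.array([0.10, 0.12, 0.50])                      # support on the unit interval
a = np.array([1.0, -1.0 + 0.5j, 0.7])                 # non-zero complex amplitudes
M = 50

V = vandermonde(t, M)
f_hat = V @ a                                         # exact Fourier coefficients
# noisy measurements y = f_hat + eta, with a small complex Gaussian eta
y = f_hat + 0.01 * (np.random.randn(M + 1) + 1j * np.random.randn(M + 1))
```

Here the first two atoms are 0.02 apart, well below the standard resolution 1/M = 0.02, so this configuration is already in the super-resolution regime discussed in the introduction.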
The goal of spectral estimation is to stably recover μ, including the support T and the amplitudes a, from y. A typical two-step strategy is to estimate the support set first and then the amplitudes. ESPRIT exploits the Vandermonde decomposition of a Hankel matrix in order to reformulate the support estimation step as an eigenvalue problem. Throughout the exposition, L is an integer parameter for ESPRIT that satisfies

(2.7)  S ≤ L ≤ M − S.

Note that it is always possible to find an L that satisfies the above inequalities whenever the number of measurements exceeds the number of unknowns: M + 1 > 2S. The Hankel matrix of y (with parameter L) is defined to be

H(y) := [ y_{j+k} ]_{0 ≤ j ≤ M−L, 0 ≤ k ≤ L} ∈ ℂ^{(M−L+1) × (L+1)}.
We first describe ESPRIT in the noiseless setting and then outline how it deals with noise. In the case where η = 0, we have access to the Hankel matrix H(f̂), and a direct calculation shows that H(f̂) possesses the following Vandermonde decomposition:

(2.9)  H(f̂) = V_{M−L}(T) D_a V_L(T)^T,

where D_a := diag(a_1, …, a_S). The conditions in (2.7) imply that both V_{M−L}(T) and V_L(T) have full column rank, which in turn implies that H(f̂) has rank S. More importantly, we have Range(H(f̂)) = Range(V_{M−L}(T)), which means H(f̂) contains full information about the column span of V_{M−L}(T). ESPRIT amounts to finding an orthonormal basis of this column space and using this basis to recover T. The procedure of finding an orthonormal basis of Range(H(f̂)) can be realized by a Singular Value Decomposition (SVD) or a QR decomposition of H(f̂).
Let the SVD of H(f̂) be

(2.10)  H(f̂) = [U  U_⊥] Σ [W  W_⊥]^*,

where U ∈ ℂ^{(M−L+1) × S} holds the first S left singular vectors. Comparing the identities (2.9) and (2.10), we see that the column spaces of U and H(f̂) are identical to Range(V_{M−L}(T)). There exists an invertible matrix P ∈ ℂ^{S × S} such that

U = V_{M−L}(T) P.
Let U_0 and U_1 be the two submatrices of U containing its first M − L rows and its last M − L rows, respectively. Then we have

U_0 = V_{M−L−1}(T) P  and  U_1 = V_{M−L−1}(T) D P,

where D := diag(e^{−2πi t_1}, …, e^{−2πi t_S}). Setting L as in (2.7) guarantees that U_0 and U_1 have full column rank.

It follows from these definitions that if we define the matrix Ψ := U_0^† U_1, where † denotes the Moore–Penrose pseudo-inverse, then

Ψ = P^{−1} D P.

Hence, the eigenvalues of Ψ are exactly e^{−2πi t_1}, …, e^{−2πi t_S}. The ESPRIT technique amounts to finding the support set T through the eigenvalues of Ψ.

In the presence of noise, the ESPRIT algorithm forms the noisy Hankel matrix H(y). If the noise is sufficiently small, then the rank of H(y) is at least S. ESPRIT computes a matrix H_S(y), defined to be the best rank-S approximation of H(y) in the spectral norm; this amounts to computing the SVD of H(y) and truncating the singular values and singular vectors beyond the first S. We write the SVD of H(y) as in (2.13).

When the size of the noise is sufficiently small, we expect the column space of H_S(y) to be a small perturbation of that of H(f̂). The ESPRIT algorithm proposes to find the eigenvalues of

Ψ̂ := Û_0^† Û_1,

where Û_0 and Û_1 contain the first M − L rows and the last M − L rows of Û, respectively. Projecting the eigenvalues onto the complex unit circle provides us with an estimator T̂ for T. Further details can be found in Algorithm 1.

Input: measurements y ∈ ℂ^{M+1}, sparsity S, parameter L satisfying (2.7)

  1. Form the Hankel matrix H(y) ∈ ℂ^{(M−L+1) × (L+1)}.

  2. Compute the SVD of H(y):

     H(y) = [Û  Û_⊥] diag(σ_1, σ_2, …) [Ŵ  Ŵ_⊥]^*,

     where σ_1 ≥ σ_2 ≥ ⋯ are the singular values of H(y), and Û contains the first S left singular vectors.

  3. Let Û_0 and Û_1 be the two submatrices of Û containing its first M − L rows and its last M − L rows, respectively. Compute

     Ψ̂ = Û_0^† Û_1

     and its eigenvalues λ̂_1, …, λ̂_S.

  4. Output T̂ = {t̂_1, …, t̂_S}, where e^{−2πi t̂_j} = λ̂_j / |λ̂_j|.

Algorithm 1 ESPRIT
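The steps of Algorithm 1 can be sketched in a few lines of NumPy. This is an illustrative implementation under the notational assumptions of this review, not the authors' reference code:

```python
import numpy as np

def esprit(y, S, L):
    """ESPRIT sketch. y: Fourier coefficients (mu_hat(0), ..., mu_hat(M)),
    S: number of atoms, L: Hankel parameter with S <= L <= M - S.
    Returns S estimated nodes in [0, 1), sorted."""
    M = len(y) - 1
    # 1. Hankel matrix H[j, k] = y[j + k], shape (M - L + 1) x (L + 1)
    H = np.array([[y[j + k] for k in range(L + 1)] for j in range(M - L + 1)])
    # 2. SVD and rank-S truncation: keep the top-S left singular vectors
    U = np.linalg.svd(H)[0][:, :S]
    U0, U1 = U[:-1, :], U[1:, :]           # first / last M - L rows of U
    # 3. Rotation-invariance step and its eigenvalues
    eigvals = np.linalg.eigvals(np.linalg.pinv(U0) @ U1)
    # 4. Project eigenvalues to the unit circle and read off the support
    return np.sort(np.mod(-np.angle(eigvals) / (2 * np.pi), 1.0))
```

For noiseless measurements the recovery is exact up to floating-point error, in agreement with the Vandermonde decomposition argument above; with small noise the error degrades gracefully, which is the content of Theorem 1 below.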

3 Robustness of ESPRIT

A central interest in the ESPRIT algorithm is its stability. The main goal of this section is to bound the error between T and T̂ in terms of the matrices that appear in the ESPRIT algorithm. All of the results in this section are proved in Appendix A.

Before we proceed to the stability analysis, we need to point out a subtle and important feature of ESPRIT. The singular values and singular subspaces of a matrix are unique, but the SVD only provides us with one of infinitely many equivalent orthonormal bases. Importantly, ESPRIT is invariant to the specific choice of orthonormal basis for the column span of U. In other words, the eigenvalues of U_0^† U_1 remain the same if one uses another orthonormal basis for the column span of U. To see why, let Ũ be another orthonormal basis for the column span of U. Then there exists an invertible matrix Q such that Ũ = U Q. Let Ũ_0 and Ũ_1 be the two submatrices of Ũ containing its first M − L rows and its last M − L rows, respectively. Then Ũ_0 = U_0 Q and Ũ_1 = U_1 Q. It follows that Ũ_0^† Ũ_1 = Q^{−1} (U_0^† U_1) Q, so the eigenvalues of Ũ_0^† Ũ_1 are identical to those of U_0^† U_1.

It follows from the above observation that we can make the following reduction. The output of ESPRIT is independent of the particular choice of basis for the singular spaces, so for the mathematical analysis we can, without loss of generality, select particular matrices U and Û that are most suitable for our analysis. It turns out that the most convenient choice is when the columns of U and Û align in a proper way, which we describe below.

Let θ_1, …, θ_S be the canonical angles between the subspaces spanned by the columns of U and Û. Since ESPRIT is invariant to the choice of orthonormal basis, when we write U and Û, we refer to the specific choice of bases for which their columns consist of the canonical vectors.¹

¹In other words, we let U = [u_1, …, u_S] and Û = [û_1, …, û_S], and assume that ⟨u_j, û_k⟩ = δ_{j,k} cos θ_j.
The first perturbation bound, on the canonical angles, follows from Wedin's theorem [45, 23]:

Lemma 1.

Fix positive integers S, L, M such that M ≥ 2S and L satisfies (2.7). For any μ and η such that ‖H(η)‖ ≤ σ_S(H(f̂))/2, we have

max_j sin θ_j ≤ 2 ‖H(η)‖ / σ_S(H(f̂)).
Lemma 1 shows that the column spaces of U and Û are close when the noise is sufficiently small. Under the assumptions in (3.1), we have the following perturbation bound on Û_0 and Û_1.

Lemma 2.

Fix positive integers S, L, M such that M ≥ 2S and L satisfies (2.7). For any μ and η such that ‖H(η)‖ ≤ σ_S(H(f̂))/2, we have

max( ‖U_0 − Û_0‖, ‖U_1 − Û_1‖ ) ≤ 2√2 ‖H(η)‖ / σ_S(H(f̂)).
The matrix Ψ = U_0^† U_1 is diagonalizable and has eigenvalues e^{−2πi t_1}, …, e^{−2πi t_S}. Note that the eigenvalues of Ψ might not have multiplicity one. Let λ̂_1, …, λ̂_S be the eigenvalues of Ψ̂. The matching distance between the spectra of Ψ and Ψ̂ is defined to be

md(Ψ, Ψ̂) := min_π max_j | λ̂_{π(j)} − e^{−2πi t_j} |,

where the minimum is taken over all permutations π of {1, …, S}. Thanks to the Bauer–Fike theorem, see [4] and [44, Theorem 3.3], we have

(3.2)  md(Ψ, Ψ̂) ≤ (2S − 1) ‖P‖ ‖P^{−1}‖ ‖Ψ − Ψ̂‖.
We then project each λ̂_j to the complex unit circle to obtain e^{−2πi t̂_j} := λ̂_j / |λ̂_j|. Let T̂ = {t̂_1, …, t̂_S}, which is the output of ESPRIT. Note that T̂ is not necessarily a set because it may contain repeated entries, but we will see that if the noise is sufficiently small depending on μ, then T̂ necessarily consists of S distinct values. We define the matching distance between T and T̂ in the same fashion,

md(T, T̂) := min_π max_j | t_j − t̂_{π(j)} |_𝕋.
If λ̂_j lies on or outside the unit circle, then we have the trivial inequality

| e^{−2πi t_j} − λ̂_j / |λ̂_j| | ≤ | e^{−2πi t_j} − λ̂_j |,

since projection onto the closed unit disk does not increase distances to points of the disk. On the other hand, if λ̂_j lies inside the unit circle, we have

| e^{−2πi t_j} − e^{−2πi t̂_j} | ≤ 2 | e^{−2πi t_j} − λ̂_j |  and  2π |t_j − t̂_j|_𝕋 ≤ (π/2) | e^{−2πi t_j} − e^{−2πi t̂_j} |.

The first inequality is a consequence of the law of sines, see Figure 3.1. The second inequality follows by observing that the function sin is concave on the domain [0, π/2] with sin(0) = 0 and sin(π/2) = 1, so that sin(x) ≥ 2x/π for all x ∈ [0, π/2]. Thus, regardless of the value of λ̂_j, we always have the inequality

(3.3)  md(T, T̂) ≤ (1/2) md(Ψ, Ψ̂).

Figure 3.1: Geometric figure corresponding to inequality (3.3). Here, λ̂ lies inside the unit circle and e^{−2πi t̂} = λ̂/|λ̂| is its projection onto the circle. By the law of sines, the chord from e^{−2πi t} to the projection λ̂/|λ̂| is at most twice the length of the segment from e^{−2πi t} to λ̂. This implies | e^{−2πi t} − e^{−2πi t̂} | ≤ 2 | e^{−2πi t} − λ̂ |.
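The matching distance used throughout this section can be computed by brute force over permutations. A minimal sketch, feasible for small S (the helper names `wrap_dist` and `matching_distance` are ours, not the paper's):

```python
import numpy as np
from itertools import permutations

def wrap_dist(u, v):
    """Wrap-around distance |u - v| on the torus (periodic unit interval)."""
    d = np.abs(u - v) % 1.0
    return np.minimum(d, 1.0 - d)

def matching_distance(t, t_hat):
    """md(T, T_hat) = min over permutations pi of max_j |t_j - t_hat_pi(j)|."""
    t, t_hat = np.asarray(t), np.asarray(t_hat)
    return min(max(wrap_dist(a, b) for a, b in zip(t, t_hat[list(p)]))
               for p in permutations(range(len(t_hat))))
```

The brute-force minimum over S! permutations is only for illustration; for larger S one would solve a bottleneck assignment problem instead.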

Combining Lemma 2 and inequalities (3.2) and (3.3) gives rise to the following deterministic bound on the matching distance (see proof in Appendix A.2).

Theorem 1.

Fix positive integers S, L, M such that M ≥ 2S and L satisfies (2.7). For any μ and η such that


the output T̂ of ESPRIT satisfies


At this point, it is not clear how useful Theorem 1 is, because the upper bound on md(T, T̂) depends on the minimum singular values of several matrices – each of these matrices implicitly depends on μ, so their minimum singular values could be extraordinarily small for certain μ. Fortunately, the dependence of σ_min(V_M(T)) on T is now well understood for many types of T, see [30, 25], and we present these results in Section 4. The lower bounds for σ_min(U_0) and σ_min(Û_0) are summarized in Proposition 1 below, and our proof requires the development of a non-harmonic uncertainty principle, which we discuss later in Section 5.

Proposition 1.

Fix positive integers S, L, M such that M ≥ 2S and L satisfies (2.7). For any μ, we have

Remark 1.

In the case of real-valued amplitudes a_j, the lower bound can be improved to

Let us explain the significance of Proposition 1. The key to understanding the stability of ESPRIT is to obtain a sharp dependence on σ_min(V_M(T)), because this quantity shrinks rapidly as Δ decreases below 1/M. The proposition shows that σ_min(U_0) and σ_min(Û_0) are uniformly bounded below over all T, so if we set L ≈ M/2, then (3.5) is roughly

Here, the implicit constant is independent of T but only depends on S and L. The numerical experiments show that Theorem 1 provides the best possible dependence on σ_min(V_M(T)), see Figure 4.2.

We can compare our bound with earlier stability bounds for ESPRIT:²

²The paper [15] analyzed a variation of the classical ESPRIT algorithm. That paper provided an error bound in terms of the Hausdorff distance, but it can be upgraded to the matching distance without additional assumptions or loss of accuracy.


We will later see that the exponent on σ_min(V_M(T)) is crucial since it determines the exponent of the SRF in the resolution analysis for ESPRIT.

Having seen that controlling σ_min(U_0) and σ_min(Û_0) independently of T is the key to understanding the robustness of ESPRIT, let us discuss the technical challenges that we faced. We first mention that, in general, deleting a row from a matrix with orthonormal columns could have catastrophic effects. For instance, the 2 × 2 identity matrix has orthonormal columns, but if we delete its last row, the resulting 1 × 2 matrix does not even have full column rank. In the analysis of ESPRIT, we do not deal with arbitrary matrices with orthonormal columns because U has column space equal to that of the Vandermonde matrix V_{M−L}(T). While U can be explicitly realized as the result of a Gram–Schmidt orthogonalization process applied to the columns of V_{M−L}(T), it is hard to leverage this relationship in a theoretical form. Thus, all we have to work with is this abstract relationship between U and V_{M−L}(T), and the proof of Proposition 1 shows that controlling σ_min(U_0) and σ_min(Û_0) is equivalent to an uncertainty principle.

To complete this section, we discuss how to estimate the amplitudes. The ESPRIT algorithm only provides us with an estimate for the support set. We can approximate the amplitudes using the least squares solution. Let T̂ = {t̂_1, …, t̂_S}, where T̂ has already been sorted to best match T, and let â be the least squares solution to V_M(T̂) x ≈ y.

Proposition 2.

Suppose the assumptions of Theorem 1 hold and that . Then

We next explain what Proposition 2 says at a heuristic level. When the noise is sufficiently small, σ_S(H(y)) is comparable to σ_S(H(f̂)), which in turn is comparable to the corresponding singular value of the Vandermonde matrix because L is often chosen to be approximately M/2. These arguments can be made rigorous, but we omit the details for the sake of the explanation. Combining Theorem 1, Proposition 1, and Proposition 2 shows that there is an extra singular value term in the amplitude error, so that the amplitude error is approximately of the form

The factor of order √M in the first term is natural because the columns of V_M(T) have length √(M+1) rather than unit length, and so the least squares solution has a corresponding scaling factor.
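The least squares amplitude estimate discussed above can be sketched as follows (illustrative names; `numpy.linalg.lstsq` solves the least squares problem via an SVD-based method):

```python
import numpy as np

def vandermonde(t, M):
    """(M+1) x S Fourier/Vandermonde matrix with nodes exp(-2*pi*i*t_j)."""
    m = np.arange(M + 1).reshape(-1, 1)
    return np.exp(-2j * np.pi * m * np.asarray(t).reshape(1, -1))

def estimate_amplitudes(y, t_hat):
    """Least squares amplitudes given the recovered support t_hat:
    a_hat = argmin_x || V(t_hat) x - y ||_2."""
    V = vandermonde(t_hat, len(y) - 1)
    a_hat, *_ = np.linalg.lstsq(V, y, rcond=None)
    return a_hat
```

When T̂ is exact and the measurements are noiseless, this recovers the amplitudes exactly; with noise, the error behaves as described by Proposition 2.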

4 Super-resolution limit of ESPRIT

In signal processing, ESPRIT has been widely used due to its superior numerical performance among subspace methods. This section is devoted to an analysis of the resolution limit of ESPRIT under the separated clumps model of [25, 26]. The statements that appear in this section are proved in Appendix A.

4.1 The minimum singular value of the Vandermonde matrix

We first define the separated clumps model (see [25, 26] for more details).

Assumption 1 (Separated clumps model).

Let S, R, and M be positive integers and let T ⊂ 𝕋 have cardinality S. We say that T consists of R separated clumps with parameters (M, S, R, β) if the following hold.

  1. T can be written as the union of R disjoint sets T_1, …, T_R, where each clump T_r is contained in an interval of length λ_r / M.

  2. S = λ_1 + ⋯ + λ_R, where λ_r is the cardinality of T_r.

  3. If R > 1, then the distance between any two clumps is at least β / M.

Figure 4.1: T = T_1 ∪ T_2, where each T_r contains 3 equally spaced atoms with spacing Δ. The clumps are separated by at least β/M.

An example of separated clumps is shown in Figure 4.1. In applications there are many types of discrete sets that consist of separated clumps. One extreme example is when T is a single clump containing all S points. This is considered to be the worst case configuration for T in the sense that super-resolution will be highly sensitive to noise. Another extreme instance is when all points in T are separated by at least 1/M, so we can think of T as having S clumps, each containing a single point. This is widely considered to be the best case scenario, in which super-resolution is least sensitive to noise. While our assumption applies to both extremes, the in-between case where T consists of several clumps each of modest size is the most interesting, and developing a theory of super-resolution for this case is the most challenging.
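To build intuition for the clumps model, one can numerically examine how the minimum singular value of the Vandermonde matrix decays as the atoms within each clump move closer together. The following sketch (illustrative parameters, not the paper's experimental setup) generates two clumps of three atoms each and reports σ_min for several values of SRF = 1/(MΔ):

```python
import numpy as np

def vandermonde(t, M):
    """(M+1) x S Fourier/Vandermonde matrix with nodes exp(-2*pi*i*t_j)."""
    m = np.arange(M + 1).reshape(-1, 1)
    return np.exp(-2j * np.pi * m * np.asarray(t).reshape(1, -1))

def clumped_support(num_clumps, lam, delta):
    """num_clumps clumps of lam atoms each, spaced delta apart within a clump;
    clump centers are spread evenly (hence well separated) over the torus."""
    centers = np.arange(num_clumps) / num_clumps
    return np.sort(np.concatenate(
        [c + delta * np.arange(lam) for c in centers]) % 1.0)

M = 100
for srf in (2, 4, 8):
    delta = 1.0 / (M * srf)                     # SRF = 1/(M * delta)
    t = clumped_support(2, 3, delta)            # two clumps, lambda = 3
    smin = np.linalg.svd(vandermonde(t, M), compute_uv=False)[-1]
    print(f"SRF = {srf}: sigma_min = {smin:.3e}")
```

Consistent with Theorem 2, σ_min shrinks as the SRF grows, and the decay rate is governed by the clump cardinality λ rather than the total number of atoms S.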

We proved in [25, 26] that, under this separated clumps model, σ_min(V_M(T)) is bounded below by an aggregate of R terms, where each term only depends on the “geometry” of the corresponding clump.

Theorem 2.

Fix positive integers S and M. Assume T satisfies Assumption 1 with parameters (M, S, R, β) for some β and


Then there exist explicit constants such that


The main feature of this theorem is the exponents, which depend on the cardinality of each clump as opposed to the total number of points. Let λ be the cardinality of the largest clump: λ := max_{1 ≤ r ≤ R} λ_r.

Theorem 2 implies the following bound (which is looser, but easier to digest):

(4.3)  σ_min(V_M(T)) ≥ C √M · SRF^{−(λ−1)}.

Previous results [10, 9] strongly suggest³ that, without exploiting the clump structure, the best bound of this type is

(4.4)  σ_min(V_M(T)) ≥ C √M · SRF^{−(S−1)}.

³We avoid using the word “imply” because those papers studied a similar inverse problem but with continuous Fourier measurements, rather than discrete measurements, like the ones considered here.
By comparing the inequalities (4.3) and (4.4), we see that the former is dramatically better whenever the point sources are not all located within a single clump. These results are also consistent with our intuition that σ_min(V_M(T)) is smallest when T consists of closely spaced points; more details can be found in [25].

4.2 Super-resolution limit of ESPRIT

According to Theorem 1, the matching distance between T and T̂ is proportional to the noise level. If η is independent Gaussian noise, i.e., each entry of η is an independent Gaussian random variable with variance σ², the spectral norm of the Hankel matrix H(η) satisfies the following concentration inequality [27, Theorem 4]:

Proposition 3.

If and , then for any ,

Combining Theorem 1, Theorem 2 and Proposition 3 gives rise to a stability analysis of ESPRIT under the separated clumps model (see Appendix A.5 for the proof).

Theorem 3.

Fix positive integers S and M such that M is even and set L = M/2. Assume T satisfies Assumption 1 with parameters (M, S, R, β) for some β satisfying (4.1). There exist explicit constants such that the following hold.

For any noise η and any μ supported in T with


the output T̂ of ESPRIT satisfies, with probability no less than


In order to guarantee that the matching distance between the exact support and the recovered one is at most ε, the noise-to-signal ratio should follow the scaling law


Let λ be the cardinality of the largest clump. By (4.3), this scaling law reduces to

(4.6)  noise-to-signal ratio ≲ SRF^{−(2λ−2)} · ε.
We prove that the noise tolerance of ESPRIT deteriorates polynomially in the SRF, with an exponent that depends only on the cardinality of the largest of the separated clumps instead of the total sparsity S. These estimates are verified by numerical experiments in Section 4.3.

4.3 Numerical simulations

We next perform numerical simulations to verify Theorem 1 and the scaling law in (4.6) that was predicted by Theorem 3. In our simulations, the true support T contains 2 or 3 clumps of equally spaced atoms consecutively spaced by Δ, while the clumps are separated by at least β/M (see Figure 4.3 (a) for an example). The coefficients a_j have unit magnitudes and random phases. We fix M and let Δ vary so that the SRF varies. The noise is Gaussian: each entry of η is drawn independently from a Gaussian distribution with variance σ².

The support error is measured by the matching distance md(T, T̂). For each parameter setting, we randomly choose the phases of a and run the experiments several times with random noise draws and the fixed amplitudes a. The average support error is taken as the average of the support matching distance over these experiments. In order to test ESPRIT's capability of dealing with arbitrary complex phases, we then take the worst average support error over the random phases.

4.3.1 Matching distance versus σ and SRF

Our first set of experiments is to verify Theorem 1, which proves the scaling law


By employing Theorem 2, the above can be rewritten as


In the first experiment the SRF is fixed and the noise level σ varies; in the second, σ is fixed and the SRF varies. In each case we record the average md over the experiments with random noise, for the worst random phases of a. Figure 4.2 displays the log-log plot of the average md versus σ and versus SRF for the two clump configurations, (a) and (b). The curves appear to be straight lines, and their slopes verify the theoretical predictions given by the scaling laws (4.7) and (4.8).

Figure 4.2: These figures display the log-log plot of the average md versus σ (the left column) and versus SRF (the right column) for the clump configuration in (a) and the clump configuration in (b).

4.3.2 Phase transition

Our second set of experiments is more comprehensive than our first set, because we allow both σ and the SRF to vary. We perform trials for each pair of SRF and σ, and we include the performance of MUSIC to serve as a comparison.


(a) clumps of equally spaced objects
Figure 4.3: (b) and (c) display the average md over the trials with respect to SRF (x-axis) and noise level σ (y-axis) when T contains clusters of consecutively spaced objects.

Figure 4.3 (b) and (c) display the average value of md over the trials with respect to SRF (x-axis) and σ (y-axis) when T contains clumps of consecutively spaced atoms. A clear phase transition demonstrates that MUSIC and ESPRIT are capable of resolving closely spaced complex-valued objects as long as σ is below a certain threshold depending on the SRF. ESPRIT outperforms MUSIC as it can tolerate a larger amount of noise.

In Figure 4.4, we display the phase transition curves. We say the output of either MUSIC or ESPRIT is successful if