Inversion of trace formulas for a Sturm-Liouville operator

by Xiang Xu, et al.
Zhejiang University

This paper revisits the classical problem "Can we hear the density of a string?", which can be formulated as an inverse spectral problem for a Sturm-Liouville operator. Based on inverting a sequence of trace formulas, we propose a new numerical scheme to reconstruct the density. Numerical experiments are presented to verify the validity and effectiveness of the numerical scheme.




1. Introduction

The main purpose of this paper is to propose a novel numerical scheme for some inverse spectral problems. The scheme can be generalized, but for simplicity will be presented with an ad hoc algorithm for the classical Sturm-Liouville eigenvalue problem:


\[
-u''(x) = \lambda\,\rho(x)\,u(x), \quad x \in (0,1), \qquad u(0) = u(1) = 0, \tag{1}
\]
where $\rho \in L^\infty(0,1)$ and $\rho > 0$. Denote $\{\lambda_n\}_{n=1}^{\infty}$ to be the generalized eigenvalues for the above problem. The inverse spectral problem is to recover the density $\rho$ from those eigenvalues $\{\lambda_n\}$. To avoid the nonuniqueness, we assume $\rho$ is even with respect to $x = 1/2$, that is, $\rho(x) = \rho(1-x)$.

If $\rho \in C^2[0,1]$, then the Liouville transformation
\[
t(x) = \frac{1}{L}\int_0^x \sqrt{\rho(s)}\,ds, \qquad L = \int_0^1 \sqrt{\rho(s)}\,ds, \qquad v(t) = \rho(x)^{1/4}\,u(x),
\]
reduces the inverse problem to the problem of recovering $q$ in
\[
-v''(t) + q(t)\,v(t) = \mu\,v(t), \quad t \in (0,1), \qquad v(0) = v(1) = 0, \tag{2}
\]
from the eigenvalues $\mu_n = L^2 \lambda_n$, where
\[
q(t) = -L^2\,\rho(x)^{-3/4}\,\frac{d^2}{dx^2}\,\rho(x)^{-1/4}.
\]
The real-valued number $L$ can be recovered from spectral data [rundell1992reconstruction1], and the uniqueness of $q$ in (2) is guaranteed if $q$ is assumed to be symmetric about $t = 1/2$. For more details, we refer to [kirsch2011introduction] and the discussions therein. It is still notable that the problems for $\rho$ and $q$ might be slightly different. First, the recovery of $\rho$ from $q$ through the relation above might need some additional information. Second, we need some regularity assumption on $\rho$ to perform the Liouville transformation. In this paper, we will deal directly with problem (1), and hence only need to assume that $\rho$ is bounded, positive, and even with respect to $x = 1/2$ in the discussions below.

Numerical methods for the one-dimensional inverse spectral problems (1) and (2) abound. We refer to [lowe1992recovery, rundell1992reconstruction1, rundell1992reconstruction] and the references therein. However, most methods rely heavily on the one-dimensional nature of the problem. In this paper, we give a novel numerical method for the reconstruction of $\rho$ in (1) based on trace formulas. Moreover, we believe that this method can be adapted to some other, even higher-dimensional, inverse spectral problems.

We need Fréchet differentiability of the forward map if we intend to use a gradient-based algorithm for solving inverse problems. Assume we have an elliptic operator $L_p$ on a compact manifold (with or without boundary), depending on a parameter $p$, and we want to recover $p$ from the eigenvalues of $L_p$. The differentiability of the map
\[
p \;\longmapsto\; \{\lambda_n(p)\}_{n=1}^{\infty}
\]
is a very delicate issue. Perturbation theory of eigenvalues and eigenfunctions has been studied by Kato [kato2013perturbation], but is quite technical. Although this is not a problem for the inverse spectral problem related to (1), since all of its eigenvalues are simple, and many methods do not directly invert this map because of other concerns, one needs to bear in mind that it might be a serious issue for some higher-dimensional or non-Hermitian problems. In this article, we will give an inversion scheme based on a different map, arising from trace formulas, for which the Fréchet differentiability is well guaranteed. For (1), the map is given as follows:
\[
\rho \;\longmapsto\; \Big\{ \sum_{n=1}^{\infty} \lambda_n^{-k} \Big\}_{k=1}^{\infty}.
\]
To be more precise, for the sake of numerical stability we will actually invert the following map,
\[
\rho \;\longmapsto\; \Big\{ \sum_{n=1}^{\infty} p_k(\lambda_n^{-1}) \Big\}_{k=1}^{\infty},
\]
where $\{p_k\}$ is a sequence of carefully chosen polynomials with $p_k(0) = 0$.

The rest of the paper is organized as follows. In Section 2, we derive a sequence of trace formulas that will be utilized for inversion. In Section 3, the inversion scheme is presented together with various implementation details. In Section 4, we present some numerical experiments and discuss the performance of the algorithm. In Section 5, we discuss the merits and limitations of the proposed algorithm.

2. Trace formulas

Our starting point for the new algorithm is the observation of the identity
\[
\sum_{n=1}^{\infty} \frac{1}{\lambda_n} = \int_0^1 x\,(1-x)\,\rho(x)\,dx,
\]
which was mentioned in [gesztesy2011damped]. The above identity gives an explicit relation between the density and the eigenvalues. We will derive formulas for $\sum_n \lambda_n^{-k}$, $k \ge 1$, in this section.
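This identity is easy to test numerically. The following sketch (our own illustration, not part of the paper; the even density is an arbitrary example) discretizes the string problem by second differences and compares the two sides:

```python
import numpy as np

# Sanity check of the trace identity
#   sum_n 1/lambda_n = int_0^1 x (1-x) rho(x) dx
# for -u'' = lambda * rho * u on (0,1) with u(0) = u(1) = 0.
N = 400
h = 1.0 / N
x = np.linspace(h, 1.0 - h, N - 1)             # interior grid points
rho = 1.0 + 0.5 * np.cos(2.0 * np.pi * x)      # an even density: rho(x) = rho(1-x)

# Standard second-difference Dirichlet Laplacian
L = (np.diag(2.0 * np.ones(N - 1))
     - np.diag(np.ones(N - 2), 1)
     - np.diag(np.ones(N - 2), -1)) / h**2

# Generalized eigenvalues of  L u = lambda * diag(rho) u
lam = np.linalg.eigvals(np.linalg.solve(np.diag(rho), L)).real
lhs = np.sum(1.0 / lam)
rhs = h * np.sum(x * (1.0 - x) * rho)          # quadrature of the right-hand integral
print(lhs, rhs)
```

For this discretization the agreement is essentially exact, because the diagonal of the inverse of the second-difference matrix reproduces $x(1-x)$ on the grid.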

We denote the Dirichlet Laplacian on $(0,1)$ as $-\Delta$. Define $K_0 = (-\Delta)^{-1}$, such that $u = K_0 f$ satisfies
\[
-u'' = f \ \text{in } (0,1), \qquad u(0) = u(1) = 0.
\]
The operator $K_0$ is an integral operator, in the form
\[
(K_0 f)(x) = \int_0^1 G(x,y)\,f(y)\,dy,
\]
with the associated Schwartz kernel
\[
G(x,y) = \min(x,y)\,\big(1 - \max(x,y)\big).
\]
It is clear that $G$ is continuous on $[0,1] \times [0,1]$.

Denote the operator $A = A(\rho)$ as $Af = u$, where $u$ solves
\[
-u'' = \rho f \ \text{in } (0,1), \qquad u(0) = u(1) = 0.
\]
Then we have $A = K_0 M_\rho$, where $M_\rho$ is the multiplication operator $f \mapsto \rho f$, and the Schwartz kernel of $A$ is
\[
K(x,y) = G(x,y)\,\rho(y).
\]
Note that the eigenvalues of $A$ are exactly $\{\lambda_n^{-1}\}$.

Recall that a bounded linear operator $T$ over a separable Hilbert space $H$ is said to be of trace class if, for some orthonormal basis $\{e_n\}$ of $H$, the sum
\[
\sum_n \big\langle (T^*T)^{1/2} e_n, e_n \big\rangle
\]
is finite. In this case, the trace of $T$ is given by
\[
\operatorname{tr} T = \sum_n \langle T e_n, e_n \rangle.
\]
Here $\operatorname{tr} T$ is independent of the choice of orthonormal basis, and Lidskii's theorem states that if $\{\nu_n\}$ are the eigenvalues of $T$, then
\[
\operatorname{tr} T = \sum_n \nu_n.
\]
Also recall that a bounded operator $T$ is called Hilbert-Schmidt if $T^*T$ is of trace class. We need the following propositions to continue. For more details, we refer to [Lax].

Proposition 1.

If $S$ and $T$ are Hilbert-Schmidt operators, then $ST$ and $TS$ are of trace class, and
\[
\operatorname{tr}(ST) = \operatorname{tr}(TS).
\]

Assume $\Omega$ is a bounded domain in $\mathbb{R}^d$ with smooth boundary.

Proposition 2.

Let $T$ be an operator of trace class on $L^2(\Omega)$, given by
\[
(Tf)(x) = \int_\Omega K(x,y)\,f(y)\,dy
\]
for $f \in L^2(\Omega)$. If $K$ is continuous on $\overline{\Omega} \times \overline{\Omega}$, then
\[
\operatorname{tr} T = \int_\Omega K(x,x)\,dx.
\]

Proposition 3.

If $S$ and $T$ are two Hilbert-Schmidt operators on $L^2(\Omega)$, with kernels $K_S$ and $K_T$, then the kernel of $ST$ is given by
\[
K_{ST}(x,y) = \int_\Omega K_S(x,z)\,K_T(z,y)\,dz.
\]
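Combining Propositions 1-3, the trace of a product is a double integral of the kernels, which can be checked on a grid. The sketch below (our own illustration; both kernels are chosen just for the example, one of them being the Green's kernel of the Dirichlet Laplacian) verifies $\operatorname{tr}(ST) = \int\!\!\int K_S(x,y)\,K_T(y,x)\,dy\,dx$:

```python
import numpy as np

# Discrete check of tr(S T) = int int K_S(x, y) K_T(y, x) dy dx for
# Hilbert-Schmidt operators with continuous kernels on [0, 1].
n = 300
h = 1.0 / n
x = (np.arange(n) + 0.5) * h                                  # midpoint grid
KS = np.minimum.outer(x, x) * (1.0 - np.maximum.outer(x, x))  # Green's kernel G(x, y)
KT = np.exp(-np.abs(x[:, None] - x[None, :]))                 # another smooth kernel

S, T = KS * h, KT * h                  # Nystrom discretizations of the operators
lhs = np.trace(S @ T)
rhs = h * h * np.sum(KS * KT.T)        # double integral by the midpoint rule
print(lhs, rhs)
```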

Define the traces of $A$ as
\[
T_k = \operatorname{tr}(A^k), \qquad k = 1, 2, \dots
\]
Clearly $T_k = \sum_n \lambda_n^{-k}$ by Lidskii's theorem. By the above propositions, we have
\[
T_k = \int_0^1 \!\!\cdots\! \int_0^1 K(x_1,x_2)\,K(x_2,x_3)\cdots K(x_k,x_1)\,dx_1 \cdots dx_k.
\]
We design an algorithm for the inverse spectral problem based on inverting the map
\[
\rho \;\longmapsto\; \{T_k\}_{k=1}^{\infty},
\]
which gives a more explicit relation between the function $\rho$ and the data than the eigenvalue map. The following proposition establishes the equivalence of the data $\{\lambda_n\}$ and $\{T_k\}$. To be general, we allow some eigenvalues to be non-simple.

Proposition 4.

There is a one-to-one correspondence between $\{\lambda_n\}_{n=1}^{\infty}$ and $\{T_k\}_{k=1}^{\infty}$.

Proof. Assume the eigenvalues are ordered, i.e.,
\[
0 < \lambda_1 \le \lambda_2 \le \cdots.
\]
Suppose the first eigenvalue $\lambda_1$ has multiplicity $m_1$. Then, since $T_k \sim m_1 \lambda_1^{-k}$ as $k \to \infty$,
\[
\lambda_1 = \lim_{k\to\infty} \frac{T_k}{T_{k+1}}, \qquad m_1 = \lim_{k\to\infty} T_k\,\lambda_1^{k}.
\]
We have thus determined $\lambda_1$ and its multiplicity $m_1$. Then we exclude them from the traces and iterate. ∎
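The limits used in the proof are easy to observe numerically on a toy spectrum (the numbers below are our own choice, not the paper's):

```python
import numpy as np

# From the traces T_k = sum_n lambda_n^{-k}, read off the smallest eigenvalue
# via lambda_1 = lim T_k / T_{k+1} and its multiplicity via m_1 = lim T_k * lambda_1^k.
lam = np.array([4.0, 4.0, 9.0, 16.0, 25.0])   # lambda_1 = 4 with multiplicity 2
K = 40
T = np.array([np.sum(lam ** (-float(k))) for k in range(1, K + 1)])
lam1 = T[K - 2] / T[K - 1]    # ratio of consecutive traces -> lambda_1
m1 = T[K - 1] * lam1 ** K     # -> multiplicity of lambda_1
print(lam1, m1)               # close to 4 and 2
```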

Remark 1.

One can easily check the above proof and see that the map $\{\lambda_n\}_n \mapsto \{T_k\}_{k \ge k_0}$ is actually injective for any $k_0 \ge 1$. For some (positive self-adjoint) second-order elliptic differential operators in higher dimensions, the Green's function might not be continuous on the diagonal $\{x = y\}$, and therefore we cannot apply Proposition 2 to $T_1$. However, we would not lose any information if we just use $T_k$, $k \ge 2$.

3. Algorithm

3.1. Inversion scheme

We are not going to invert the formula for $T_k$ directly, where a multiple integral needs to be evaluated. Instead, we will take advantage of the semi-separability of the Green's function of (1) to simplify the calculation. Assume $\{\mu_n, \phi_n\}$ are the eigenvalues and normalized eigenfunctions of the Dirichlet Laplacian, where
\[
\mu_n = n^2\pi^2, \qquad \phi_n(x) = \sqrt{2}\,\sin(n\pi x).
\]

Proposition 5 (Mercer's Theorem).

Assume an operator $T$ is positive definite, and its associated kernel $K(x,y)$ is a real-valued, symmetric, continuous function of $x$ and $y$. Then $K$ can be expanded in a uniformly convergent series
\[
K(x,y) = \sum_n \nu_n\,\psi_n(x)\,\psi_n(y),
\]
where $\{\nu_n\}$ and $\{\psi_n\}$ are the eigenvalues and normalized eigenfunctions of $T$.

Applying Mercer's theorem to $K_0$, whose kernel is $G$, we have
\[
G(x,y) = \sum_{n=1}^{\infty} \frac{2\sin(n\pi x)\,\sin(n\pi y)}{n^2\pi^2}.
\]
It is easy to see from this expansion and the kernel formula for $A$ that
\[
T_1 = \sum_{m=1}^{\infty} \frac{1}{m^2\pi^2} \int_0^1 \rho(x)\,\phi_m(x)^2\,dx.
\]
Denote by $\mathbf{A}$ the infinite-dimensional matrix with entries
\[
\mathbf{A}_{mn} = \frac{1}{m^2\pi^2} \int_0^1 \rho(x)\,\phi_m(x)\,\phi_n(x)\,dx.
\]
Clearly, we have
\[
T_k = \operatorname{tr}(\mathbf{A}^k).
\]
The Fréchet derivative of $\mathbf{A}$ at $\rho$ is
\[
\big(D\mathbf{A}[\rho]\,\delta\rho\big)_{mn} = \frac{1}{m^2\pi^2} \int_0^1 \delta\rho(x)\,\phi_m(x)\,\phi_n(x)\,dx.
\]
Suppose we approximate $\rho$ under a certain basis $\{\psi_j\}$:
\[
\rho(x) \approx \sum_j c_j\,\psi_j(x),
\]
and denote $c = (c_j)_j$. With a slight abuse of notation, we use $\mathbf{A}(c)$ in place of $\mathbf{A}(\rho)$ in the following. Then
\[
\frac{\partial \mathbf{A}_{mn}}{\partial c_j} = \frac{1}{m^2\pi^2} \int_0^1 \psi_j(x)\,\phi_m(x)\,\phi_n(x)\,dx.
\]

If we use a truncated Fourier cosine series,
\[
\rho(x) \approx \sum_{j=0}^{N_b - 1} c_j \cos(2\pi j x),
\]
then the integrals $\int_0^1 \cos(2\pi j x)\,\phi_m(x)\,\phi_n(x)\,dx$ can be evaluated in closed form, and the entries of $\mathbf{A}$ and of its derivatives become explicit in $c$.

Now we have explicit expressions for the forward map and its Jacobian. For the numerical implementation, we also need to truncate $\mathbf{A}$ to a finite $N \times N$ matrix.
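The truncated forward map can be sketched as follows (our own notation and example density; the quadrature and truncation size are illustrative choices): assemble the matrix with entries $\langle \rho\,\phi_n, \phi_m\rangle/(m\pi)^2$ and compare the reciprocals of its eigenvalues with the Sturm-Liouville eigenvalues.

```python
import numpy as np

# With phi_m(x) = sqrt(2) sin(m pi x), the matrix entries are
#   A_mn = <rho * phi_n, phi_m> / (m pi)^2,
# and the eigenvalues of the truncated matrix approximate 1/lambda_n.
Nt = 60                                       # truncation size
q = 4000                                      # quadrature points
xq = (np.arange(q) + 0.5) / q                 # midpoint rule on [0, 1]
rho = 1.0 + 0.5 * np.cos(2.0 * np.pi * xq)    # example even density
m = np.arange(1, Nt + 1)
phi = np.sqrt(2.0) * np.sin(np.outer(m, np.pi * xq))   # shape (Nt, q)
Gram = (phi * rho) @ phi.T / q                # entries <rho phi_n, phi_m>
A = Gram / (m[:, None] * np.pi) ** 2          # divide row m by (m pi)^2
inv_lam = np.sort(np.linalg.eigvals(A).real)[::-1]
print(1.0 / inv_lam[:3])                      # approximations of lambda_1..lambda_3
```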

In general, the Jacobian of the map
\[
c \;\longmapsto\; \{T_k\}_k
\]
is severely ill-conditioned. The fundamental reason is that it is inappropriate to measure the residual as the plain $\ell^2$ mismatch of the raw traces: $T_k$ decays geometrically in $k$ and is quickly dominated by the lowest eigenvalue alone. For a numerically more stable scheme, we will instead invert a different map,
\[
c \;\longmapsto\; \big\{\operatorname{tr} p_k(\mathbf{A})\big\}_k,
\]
for properly chosen polynomials $p_k$ with $p_k(0) = 0$. The following proposition, which follows immediately from Lidskii's theorem, is the cornerstone of the algorithm:

Proposition 6.

If $p$ is a polynomial with $p(0) = 0$, then
\[
\operatorname{tr} p(\mathbf{A}) = \sum_{n=1}^{\infty} p(\lambda_n^{-1}).
\]
Here $p(\mathbf{A})$ is again an infinite-dimensional matrix, and its trace is just the sum of its diagonal entries. One can see that computing $\operatorname{tr} p(\mathbf{A})$ needs only basic matrix operations.

Remark 2.

We emphasize here that the choice of proper polynomials $p_k$ is crucial for the algorithm. We want the associated Jacobian to be as well-conditioned as possible. The monomials $p_k(t) = t^k$ are actually very bad choices, and would lead to a failure of the algorithm.

We will use the (shifted) Chebyshev polynomials on $[0,1]$ given by the recursive formulas:
\[
T_0^*(t) = 1, \qquad T_1^*(t) = 2t - 1, \qquad T_{k+1}^*(t) = 2\,(2t-1)\,T_k^*(t) - T_{k-1}^*(t).
\]
Chebyshev polynomials have the property that
\[
|T_k^*(t)| \le 1, \qquad t \in [0,1],
\]
which is favorable to our need.

Denote $p_k(t) = T_k^*(t) - T_k^*(0)$, so that $p_k(0) = 0$; then we have the recursive formula
\[
p_0 = 0, \qquad p_1(t) = 2t, \qquad p_{k+1}(t) = 2\,(2t-1)\,p_k(t) - p_{k-1}(t) + 4\,(-1)^k\,t.
\]
We will choose $\{p_k\}$ as the polynomials for our algorithm. Therefore, essentially we will be inverting the map
\[
c \;\longmapsto\; \big\{\operatorname{tr} p_k(\mathbf{A})\big\}_k = \Big\{\sum_n p_k(\lambda_n^{-1})\Big\}_k.
\]
For the use of Chebyshev polynomials, we first choose a positive number $s$ slightly smaller than $\lambda_1$ and rescale $\mathbf{A}$ to $s\mathbf{A}$, so that the spectrum of $s\mathbf{A}$ lies slightly inside $(0,1)$. The matrix $\mathbf{A}$ in the following is the rescaled one. For convenience of understanding, one can just assume the first eigenvalue $\lambda_1$ is slightly larger than $1$.
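As a numerical illustration of the Chebyshev trace evaluation (our own sketch; the 8-by-8 matrix below is a random symmetric stand-in for the rescaled matrix), the traces $\operatorname{tr} T_k^*(\mathbf{A})$ can be computed by the three-term recursion and cross-checked against the spectral definition:

```python
import numpy as np
from numpy.polynomial.chebyshev import chebval

# Evaluate tr T*_k(A) by the shifted-Chebyshev three-term recursion and
# cross-check against tr T*_k(A) = sum_i T*_k(mu_i) over the eigenvalues mu_i.
rng = np.random.default_rng(0)
B = rng.standard_normal((8, 8))
A = B @ B.T
A /= 1.1 * np.linalg.eigvalsh(A).max()        # spectrum now inside (0, 1)

I = np.eye(8)
P_prev, P_curr = I.copy(), 2.0 * A - I        # T*_0(A), T*_1(A)
traces = [np.trace(P_prev), np.trace(P_curr)]
for k in range(2, 8):
    P_prev, P_curr = P_curr, 2.0 * (2.0 * A - I) @ P_curr - P_prev
    traces.append(np.trace(P_curr))

# Spectral cross-check, using T*_k(t) = T_k(2t - 1)
mu = np.linalg.eigvalsh(A)
spectral = [chebval(2.0 * mu - 1.0, [0.0] * k + [1.0]).sum() for k in range(8)]
print(np.max(np.abs(np.array(traces) - np.array(spectral))))
```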

For the evaluation of the forward map and its Jacobian, we will use the recursive relations obtained by substituting $\mathbf{A}$ into the recursion for $p_k$:
\[
P_0 = 0, \qquad P_1 = 2\mathbf{A}, \qquad P_{k+1} = 2\,(2\mathbf{A} - I)\,P_k - P_{k-1} + 4\,(-1)^k\,\mathbf{A},
\]
\[
\partial_{c_j} P_{k+1} = 2\,(2\mathbf{A} - I)\,\partial_{c_j} P_k + 4\,(\partial_{c_j}\mathbf{A})\,P_k - \partial_{c_j} P_{k-1} + 4\,(-1)^k\,\partial_{c_j}\mathbf{A},
\]
where $P_k = p_k(\mathbf{A})$.
The scheme of the algorithm is then summarized in Algorithm 1.

1:  precompute $\partial_{c_j}\mathbf{A}$ for all $j$, and the data traces
2:  given an initial guess $c$
3:  for the max number of iterations do
4:     form $\mathbf{A}(c)$, rescale, and initialize $P_0$, $P_1$, $\partial_{c_j} P_0$, $\partial_{c_j} P_1$
5:     for $k = 2, \dots, K$ do
6:        update $P_k$ and $\partial_{c_j} P_k$ by the recursive relations
7:     end for
8:     assemble the residual from $\operatorname{tr} P_k$ and the Jacobian from $\operatorname{tr} \partial_{c_j} P_k$
9:     compute the Gauss-Newton update of $c$ using the Jacobian and residual
10: end for
Algorithm 1 Inversion of trace formulas with Chebyshev polynomials

3.2. Details on the algorithm

Numerical trace computation. For the computation of $\sum_n p_k(\lambda_n^{-1})$ for each $k$, we also need to first use the same kind of recursive scheme to compute $p_k(\lambda_n^{-1})$, and then do the summation over $n$. One should never first calculate the monomial coefficients of $p_k$ and then evaluate $p_k$ using its explicit power form, although those coefficients could be precomputed. The main reasons are: 1) the coefficients of high-order Chebyshev polynomials vary drastically in magnitude, so one will easily encounter overflow/underflow problems; 2) the sum for large $k$ will be dominated by the contribution of the first few eigenvalues, rounding off the ones following. This is a typical issue when numerically adding a large number and a small one.
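The first point is easy to demonstrate (our own illustration, using NumPy's Chebyshev utilities):

```python
import numpy as np
from numpy.polynomial import chebyshev as Ch
from numpy.polynomial import polynomial as Po

# The power-basis coefficients of T_k grow like 2^(k-1), inviting overflow and
# catastrophic cancellation; the three-term recursion evaluates T_k stably.
k = 60
coeffs = Ch.cheb2poly([0.0] * k + [1.0])      # power-basis coefficients of T_60
print(np.abs(coeffs).max())                   # enormous (the leading one is 2^59)

t = 0.7
stable = Ch.chebval(t, [0.0] * k + [1.0])     # recursive (Clenshaw) evaluation
naive = Po.polyval(t, coeffs)                 # power-basis evaluation
exact = np.cos(k * np.arccos(t))              # T_k(t) = cos(k arccos t) on [-1, 1]
print(stable - exact, naive - exact)          # the power form loses many digits
```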

Use of Chebyshev polynomials. If we use $N$ eigenvalues as the data, we basically need the highest degree $K$ of the Chebyshev polynomials used to grow with $N$. The reason is that we need to make sure the chosen Chebyshev polynomials can discriminate all those eigenvalues. We remark here that using ALL the Chebyshev polynomials with degree lower than $K$ is totally superfluous. However, we do not know what the "optimal" choices of Chebyshev polynomials are. This is subject to further investigation, and we simply use all the polynomials for the numerical experiments presented later.

Traces using finite spectral data. Note that the traces involve infinite sums
\[
\operatorname{tr} p_k(\mathbf{A}) = \sum_{n=1}^{\infty} p_k(\lambda_n^{-1}).
\]
If only $N$ eigenvalues are known, we can approximate $\lambda_n$ for $n > N$ using the asymptotics [rundell1992reconstruction1]
\[
\sqrt{\lambda_n} = \frac{n\pi}{L} + O(n^{-1}), \qquad L = \int_0^1 \sqrt{\rho(s)}\,ds.
\]
We first get an approximation of $L$ from the first $N$ eigenvalues, and then use
\[
\widetilde{\lambda}_n = \Big(\frac{n\pi}{L}\Big)^2
\]
to approximate $\lambda_n$ with $n > N$. So basically, we are using $N$ "true" eigenvalues and a number of "approximated" eigenvalues.
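The effect of the tail correction can be illustrated in the constant-density case (our own example: for $\rho \equiv 1$ we have $\lambda_n = n^2\pi^2$ exactly and $\sum_n 1/\lambda_n = 1/6$):

```python
import numpy as np

# Keep N "true" eigenvalues and append M more from the asymptotics
# sqrt(lambda_n) ~ n pi / L (here L = 1), then compare with the exact sum 1/6.
N, M = 20, 5000
lam_true = (np.arange(1, N + 1) * np.pi) ** 2
lam_tail = (np.arange(N + 1, N + M + 1) * np.pi) ** 2   # asymptotic surrogates
truncated = np.sum(1.0 / lam_true)
corrected = truncated + np.sum(1.0 / lam_tail)
print(abs(truncated - 1.0 / 6.0), abs(corrected - 1.0 / 6.0))
```

The tail correction reduces the truncation error of the first trace by more than an order of magnitude.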

Multistep optimization. It is natural to start with a small number of basis functions, solve the optimization problem, and use the result as the initial guess for the next round with more basis functions. This is a good strategy to avoid local minima. It also means we do not need to worry too much about the fact that we did not include the important positivity constraint
\[
\rho(x) > 0
\]
in the optimization.

Figure 1 shows a two-step reconstruction. We first use a small number of eigenvalues and basis functions to compute a coarse approximation. Then we use this approximation as the initial guess for the second-step reconstruction with more eigenvalues and basis functions.

Figure 1. Left: first-step reconstruction; Right: second-step reconstruction. Notice that the reconstructed profile on the left is the initial guess for the reconstruction on the right.

Condition number. We generate Jacobian matrices with Chebyshev polynomials at a fixed density. The condition numbers, for increasing numbers of basis functions, are plotted in Figure 2. We can see that the condition number grows at an algebraic rate. This is only slightly worse than the condition number of the method introduced in [lowe1992recovery]. But again, we remark here that one can probably choose other polynomials that result in a better condition number.

Figure 2. Condition number of the Jacobian.

4. Numerical experiment

In this section we generate synthetic data to conduct some numerical experiments. To get "exact" eigenvalues, we use a Chebyshev pseudospectral collocation discretization of the Laplacian, using Trefethen's cheb routine [trefethen2000spectral]:

  % differentiation matrix for [-1,1]
  [D,x] = cheb(N);
  % convert domain to [0,1]
  x = (x+1)/2;
  x = x';
  % A is the Laplace operator on [0,1]
  % (factor 4 is the squared chain-rule factor from rescaling [-1,1] to [0,1])
  A = 4*D^2;
  % impose Dirichlet boundary condition
  A = A(2:end-1,2:end-1);

We use Chebyshev points to discretize the Laplacian; a sizable percentage of the generated eigenvalues can be regarded as "exact". The number of reliable eigenvalues that can be computed via different numerical discretizations is discussed in [zhang2015many].

For all computations, Gauss-Newton is used as the optimization algorithm, with a small tolerance on the update as the stopping criterion. For every example, multistep optimization is employed. The initial guess for the first step is a constant close to the mean of the true density $\rho$.
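A minimal Gauss-Newton loop of this kind looks as follows (a sketch with a toy residual of our own; in the actual algorithm the residual is the mismatch of the Chebyshev trace data and the Jacobian comes from the recursions of Section 3):

```python
import numpy as np

# Generic Gauss-Newton iteration: linearize the residual and solve the
# least-squares problem for the update at each step.
def gauss_newton(resid, jac, x0, tol=1e-12, maxit=50):
    x = np.array(x0, dtype=float)
    for _ in range(maxit):
        r, J = resid(x), jac(x)
        dx = np.linalg.lstsq(J, -r, rcond=None)[0]   # linearized LS problem
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy problem: fit y = exp(a t) + b for the parameters (a, b)
t = np.linspace(0.0, 1.0, 20)
y = np.exp(1.5 * t) + 0.3
resid = lambda p: np.exp(p[0] * t) + p[1] - y
jac = lambda p: np.stack([t * np.exp(p[0] * t), np.ones_like(t)], axis=1)
p = gauss_newton(resid, jac, [1.0, 0.0])
print(p)   # close to [1.5, 0.3]
```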

The parameters in the algorithm are listed in Table 1 with the typical choices of their values. Some choices have been explained above, some will be explained below.

notation | parameter | typical choice of value
| number of "true" eigenvalues used | ———
| number of basis functions |
| size of the truncated matrix |
| highest degree of Chebyshev polynomials |
| number of "approximate" eigenvalues used |
Table 1. Parameters for the algorithm

4.1. Example 1

For the first experiment, we recover a density function that is a linear combination of the first few Fourier cosine functions, and we use exactly those Fourier cosine functions to do the reconstruction. We take the number of basis functions equal to the number of eigenvalues used, to maximize the amount of information utilized. We study how different values of $N$ affect the behavior of the algorithm. The numerical results are shown in Figure 3.

We list the first few "exact" eigenvalues and the eigenvalues for the reconstructed densities in Table 2. One can see that, with increasing $N$, more eigenvalues of the reconstructed density become close to the "exact" ones. Therefore we need to take $N$ large enough to do the reconstruction.

Figure 3. Recovery of the density for three increasing values of $N$; each case shows the recovered profile and its mismatch with the true density.
true eigenvalue | eigenvalue, mismatch | eigenvalue, mismatch | eigenvalue, mismatch (columns ordered by increasing $N$)
11.6001 11.5999 0.0003 11.6000 0.0002 11.6001 0.0000
43.5488 43.5010 0.0478 43.5483 0.0004 43.5486 0.0002
93.3770 93.1545 0.2225 93.3736 0.0034 93.3736 0.0034
148.5702 147.1728 1.3974 148.5736 -0.0034 148.5586 0.0117
253.724 249.2582 4.4657 252.6704 1.0535 253.8159 -0.0920
373.8342 351.9368 21.8974 371.0629 2.7713 374.2342 -0.4000
493.2769 463.8748 29.4020 487.0575 6.2194 493.0034 0.2735
Table 2. True eigenvalues and the eigenvalues for reconstructed densities.

4.2. Example 2

For the second experiment, we recover a smooth density using increasing numbers of eigenvalues. We use a fixed number of Fourier cosine functions to do each reconstruction, together with the Chebyshev polynomials and matrix truncation described above. The numerical results are shown in Figure 4.

We also compare the reconstructed density with the Fourier approximation (using the same number of basis functions) of the true density. Results are shown in Figure 5. We see that the misfit of the reconstruction with the Fourier approximation of the density is smaller than with the exact density.

Figure 4. Recovery of the density using three increasing numbers of eigenvalues; each case shows the recovered profile and its mismatch.
Figure 5. Reconstructions versus the Fourier approximations of the density (same number of basis functions), and their differences.

4.3. Example 3

We will recover a density using noisy eigenvalues. As is discussed in [lowe1992recovery], we can never expect to do the recovery with even a few percent of error in the eigenvalues themselves; for the standard inverse Sturm-Liouville problem (2), the appropriate measure of accuracy of the data is the deviation of the eigenvalues from their asymptotic values. For our experiment, we added to each eigenvalue some noise that is normally distributed with zero mean and standard deviations $0.05$ and $0.1$. Figure 6 shows the reconstructions.

Figure 6. Recovery of the density from noisy eigenvalues: recovered profile and mismatch for noise standard deviations 0.05 and 0.1.

4.4. Example 4

For the fourth example, we work with a discontinuous density.

We first reconstruct from a smaller number of eigenvalues and basis functions. The results are shown in Figure 7. The approximation of this discontinuous function is not as close as its Fourier approximation, but one can see from Table 3 that the eigenvalues of the reconstructed density are actually closer to the "exact" eigenvalues than those of the Fourier approximation.

Then we recover, with more basis functions, from a larger number of eigenvalues. The results are also shown in Figure 7.

| reconstruction with fewer eigenvalues | reconstruction with more eigenvalues | Fourier approximation with the same number of basis functions
true eigenvalue | eigenvalue, mismatch | eigenvalue, mismatch | eigenvalue, mismatch
9.2151 9.2151 0.0000 9.2151 0.0000 9.2200 -0.0049
38.2571 38.2571 0.0000 38.2571 0.0000 38.2867 -0.0296
86.0274 86.0274 0.0000 86.0274 0.0000 86.0367 -0.0092
150.8596 150.8596 0.0000 150.8596 0.0000 150.8942 -0.0346
236.9245 236.9248 -0.0002 236.9245 0.0001 237.1215 -0.1969
343.4206 343.4212 -0.0006 343.4206 0.0000 343.5498 -0.1292
464.5170 464.5183 -0.0013 464.5163 0.0007 464.5289 -0.0119
605.4567 605.4922 -0.0354 605.4575 -0.0008 605.8644 -0.4077
770.6854 770.2775 0.4079 770.6869 -0.0015 771.2058 -0.5203
9.5050E+02 ——— ——— 9.5050E+02 0.0073 9.5050E+02 -0.0177
1.1453E+03 ——— ——— 1.1453E+03 -0.0133 1.1458E+03 -0.4331
1.3668E+03 ——— ——— 1.3668E+03 0.0185 1.368E+03 -1.1854
1.6077E+03 ——— ——— 1.6076E+03 0.0083 1.608E+03 -0.3431
1.8581E+03 ——— ——— 1.8582E+03 -0.0893 1.8581E+03 -0.0233
2.1325E+03 ——— ——— 2.1321E+03 0.4194 2.1352E+03 -2.6319
Table 3. True eigenvalues vs eigenvalues for reconstructed and the Fourier approximation
Figure 7. Recovery of the discontinuous density with the two settings of eigenvalue and basis-function counts.

5. Discussion

The main purpose of this article is not to present an algorithm cooked to perfection, but rather to propose a general strategy for solving certain types of inverse spectral problems, namely, to invert a sequence of trace formulas. We believe the same idea can be adopted to solve some other inverse spectral problems. In a subsequent paper, we will give a scheme for the inverse spectral problem for the (non-self-adjoint) damped wave operator using the same strategy.

We believe that the algorithm can be improved in various places. For example: 1) We do not know whether some polynomials other than Chebyshev would work better. Even if we are to use Chebyshev polynomials, our choice of Chebyshev polynomials is far from "optimal", and there must be plenty of techniques that can be adopted to reduce the computational cost, improve the accuracy, etc. We know for sure that the Chebyshev polynomials used in the above numerical experiments are far more than necessary; the algorithm worked equally well with much fewer Chebyshev polynomials, for example, using only a small subset for Example 4. 2) For the representation of $\rho$, one can choose bases other than the trigonometric functions, if the density cannot be well approximated by Fourier cosine functions. This also shows the great flexibility and potential of the algorithm.

A practical inverse spectral problem usually involves very limited data (the low-lying eigenvalues that are measurable), so one cannot expect to recover too many unknowns. Numerically, we are inverting some map whose domain and range both have (at least intrinsically) quite small dimensions. However, the relation between the parameter to be recovered and the data can be quite complicated. The amount of operations needed to compute the forward map might be astronomical, and varies in magnitude for different forward maps. The method presented in this article suggests a choice of the forward map which only requires basic matrix operations.

Other boundary conditions. First consider the inverse spectral problem for
\[
-u''(x) = \lambda\,\rho(x)\,u(x), \quad x \in (0,1), \qquad u(0) = 0, \quad u'(1) = 0.
\]
The eigenvalues of the above problem, together with the Dirichlet eigenvalues, should be sufficient to determine a general $\rho$. Now, denote $\widetilde{K}_0$ to be the operator defined as $\widetilde{K}_0 f = u$, where $u$ satisfies
\[
-u'' = f \ \text{in } (0,1), \qquad u(0) = 0, \quad u'(1) = 0.
\]
The Schwartz kernel associated with the operator $\widetilde{K}_0$ is
\[
\widetilde{G}(x,y) = \min(x,y).
\]
We can use the eigenvalues and eigenfunctions of the Laplacian with the new boundary conditions,
\[
\widetilde{\mu}_n = \big(n - \tfrac{1}{2}\big)^2 \pi^2, \qquad \widetilde{\phi}_n(x) = \sqrt{2}\,\sin\!\big(\big(n - \tfrac{1}{2}\big)\pi x\big),
\]
to represent
\[
\widetilde{G}(x,y) = \sum_{n=1}^{\infty} \frac{2\sin\big((n-\frac{1}{2})\pi x\big)\,\sin\big((n-\frac{1}{2})\pi y\big)}{\big(n-\frac{1}{2}\big)^2 \pi^2}.
\]
As long as the boundary value problem is well-posed with the imposed boundary conditions, one can derive such a representation of the Green's function for the Laplacian. However, it seems that the method does not work for the Neumann problem, since the Neumann Laplacian has the constants in its kernel and is therefore not invertible.