In this work we consider the following problem on a bounded domain in dimensions
for the outward unit normal to the boundary and . That is, is the solution to the Schrödinger equation with Neumann data corresponding to spectral parameter , which here we assume to be positive real. We apply Neumann boundary functions , with each , read the corresponding Dirichlet data on the boundary for multiple values of , and are interested in determining from this data. The reconstruction of from boundary measurements is known to be an ill-posed problem and, depending on the physical setup, can result in reconstructions with very poor resolution . The map from to the data is highly nonlinear and, due to the ill-posedness, is difficult to invert unless regularization is added. As examples of literature on this subject, see ,,.
A relatively new class of approaches employs ideas from model reduction theory to do direct inversion using reduced order forward models ,,,, with impressive numerical results. These works have grown out of earlier work on spectrally matched finite difference grids , where special grid steps were chosen so that the numerical solution matched functionals of the solution at the boundary in the spectral domain. The spectrally matched grid steps depended only very weakly on the medium, and this allowed for the direct extraction of the unknown coefficients . There were some extensions of this to higher dimensions , , ; however, the grid itself was fundamentally one-dimensional, blocking the generation of truly higher dimensional direct inversion methods except for special geometries. The more recent works use a more general reduced order model approach ,,,, .
Here we show how one can obtain spectrally matched finite difference operators with well chosen localized Galerkin basis functions. Using the basis functions, higher dimensional extensions are natural. This was essentially what was used in , in the time domain. In this work we consider the spectral domain, and we show explicitly how to view the reduced order models (ROMs) through the spectral Galerkin equivalence mentioned above. The Galerkin equivalence of spectrally matched grids was known on the boundary from , however the basis in which the Galerkin model is the same as the finite difference operator everywhere was not known.
A main advantage first observed in  is that, like the grid steps, the basis functions appear to depend only very weakly on the medium. It is this crucial observation that allows one to use the ROM for inversion. Furthermore, by viewing the reduced order model as embedded into the continuous problem and using the reference medium basis functions, good approximations of the internal solutions can be produced from the boundary data. We will demonstrate this with numerical examples in one and two dimensions.
The paper is organized as follows. In Section 2 we state the reconstruction problem in general. In Section 3 we consider a one dimensional version with one source. In Section 3.1 we follow the Loewner framework  and describe how one obtains a Galerkin model directly from the data; in Section 3.2 we describe the orthogonalization and prove the equivalence everywhere with spectrally matched grids. In Section 3.3 we present one dimensional numerical examples along with the internal solution generation/reconstruction algorithm. In Section 4 we describe the same approach in higher dimensions, including the generation of the ROM from the data in Section 4.1 and the orthogonalization procedure in Section 4.2. Finally, in Section 4.3 we present two dimensional numerical examples and the internal solution generation/reconstruction algorithm, and Section 5 contains some concluding remarks.
2. Problem statement.
Consider the problem (1) where we apply Neumann boundary functions , with each and denote by the solution to (1) corresponding to spectral value and source . We read data in the form of the matrix valued transfer function as depending on the spectral parameter
If, for example, each Neumann function is an approximate delta function corresponding to a source/receiver location, for , the transfer function corresponds to reading the solution coming from the th "source" at the th "receiver". We take spectral values , and define
the solution to (1) at with . Now let us assume that we are in possession of boundary data of the form
where the derivative here is with respect to . The inverse problem is to determine from the data
Such data can be obtained from the time domain via a Fourier transform.
As we will see below, from this data one can find a ROM which matches the data exactly. However, it is not at all crucial that the data be of the Hermite form (2). In fact, matching point values of at spectral points leads to a ROM with even better approximation properties and still yields a stable system , . Other forms of spectral data are also possible ,. However, we consider only the Hermite data (2) in this work for simplicity.
3. One dimensional problems.
3.1. One dimensional Galerkin model from spectral data.
where we read the data and for as would be natural coming from the time domain. (In principle could be anywhere in
away from the Neumann eigenvalues of the Schrödinger operator ). One sees easily that (3) has the variational form: Find such that
for all . We note that by Sobolev embedding and the trace theorem the values of are well defined for all test functions in . For any given value of , let us define our true spectral domain medium response by
where is the solution to (4).
Consider the exact solutions to (4) where
These form the subspace
and we can consider the Galerkin solution to the variational problem: Find such that
for any . Searching for the unknown coefficients for the solution and by setting , one obtains the system
where is the vector of unknown coefficients, and and are the mass and stiffness matrices given by
Using the variational formulation (5) with and (recall is the exact solution for ), we obtain that
Using this, if we are in possession of data , we can actually directly reconstruct and . By reversing and and subtracting, for we have
where we have used the symmetry of and . Imagine now that one were to have data corresponding to solution for some spectral point close to . Then by the above one has the formula
Taking , we have that , so that
Multiplying (7) by , reversing and and subtracting, we get
Again by taking some spectral point close to and letting one obtains
Hence the formulas (8),(9),(10),(11), well known in the model reduction community , provide the mass and stiffness matrices directly from the data. Similar formulas can be derived for other forms of spectral data. Together and provide a reduced order model for (3).
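As a concrete illustration of this construction, the following sketch (the synthetic rational response and all names are our own choices, not the paper's) assembles mass and stiffness matrices from Hermite data of the form (2) using the difference quotients above, and checks that the resulting ROM transfer function reproduces the data at the spectral points:

```python
import numpy as np

# Synthetic medium response F(s) = sum_k y_k/(lam_k + s): positive
# residues y_k, poles at -lam_k < 0 (lam, y are illustrative values).
rng = np.random.default_rng(0)
lam = np.sort(rng.uniform(1.0, 50.0, 6))
y = rng.uniform(0.1, 1.0, 6)

def F(s):  return np.sum(y / (lam + s))
def dF(s): return -np.sum(y / (lam + s) ** 2)

s = np.array([1.0, 4.0, 9.0, 16.0])        # spectral points s_j > 0
n = len(s)
Fv = np.array([F(t) for t in s])
dFv = np.array([dF(t) for t in s])

# Loewner-type formulas: mass and stiffness directly from the data.
M = np.empty((n, n)); S = np.empty((n, n))
for i in range(n):
    for j in range(n):
        if i == j:
            M[i, i] = -dFv[i]                        # -F'(s_i)
            S[i, i] = Fv[i] + s[i] * dFv[i]          # (s F)'(s_i)
        else:
            M[i, j] = (Fv[i] - Fv[j]) / (s[j] - s[i])
            S[i, j] = (s[j] * Fv[j] - s[i] * Fv[i]) / (s[j] - s[i])

# ROM transfer function b^T (S + s M)^{-1} b with b_j = F(s_j); it
# matches F at the chosen spectral points.
def F_rom(t):
    return Fv @ np.linalg.solve(S + t * M, Fv)

print(max(abs(F_rom(t) - F(t)) for t in s))   # interpolation error
```

One can check by hand that the off-diagonal formulas force the block relation (S + s_i M) e_i = b, which is exactly why the ROM response interpolates the data.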
The solution to (5) also has the property that at , as a function of it matches the data
exactly. We show this in the following Lemma, which was previously known .
First we show that the Galerkin solution yields a response at of the form (13) for all . Here we use the continuous inner product . Decomposing the solution into an orthonormal basis of Galerkin eigenpairs ,
The fact that is an eigenpair means that
which from the variational formulation for gives
Solving for we have
which from (14) yields
Hence has the same form as in (13), with positive residues and negative poles. Recall that is the rational Hermite interpolant to at the points . Now, since the trial space for the Galerkin solution contains the exact solutions for , will be the exact solution at those spectral points, which means that at ,
We claim also matches the derivatives at the spectral points as a function of . The Galerkin solution for some near can be used as a test function for , so that one has
We can also use the variational formulation for this same Galerkin solution for with as a test function to obtain
and by subtracting them we have
Taking the limit as one obtains
from (9). Hence is the rational Hermite interpolant to
at the given spectral points, and so by uniqueness of this Padé approximation,
for all . ∎
Spectral Galerkin models such as the above are difficult to work with since they tend to yield full matrices. In particular, for inverse problems it is not obvious how one may extract the coefficients, since we would not know the basis functions internally from the data. It is for these reasons that we follow along the lines of , and do a Lanczos orthogonalization to construct a new orthonormal basis for in which the operator becomes tridiagonal. In the new basis, is the identity by orthonormality.
3.2. Lanczos orthogonalization and equivalence with spectrally matched finite difference grids.
The original idea of spectrally matched grids  was that, given receiver data in the spectral domain, one chooses a rational approximation to this data and then finds the three-point finite difference stencil whose forward solution at the receiver matches this rational approximation exactly. This yielded a very special nonuniform grid and spectral convergence at the receiver. Higher order convergence for the forward model was sacrificed elsewhere in the domain, but this was irrelevant for inversion. It was also shown at the time that the receiver response of the finite difference scheme was equivalent to that of a spectral Galerkin method, for various types of spectral data . Just as above we saw how to generate a Galerkin model from data, one can generate a finite difference model from the same data.
Suppose we read the data (12) as before. Forgetting for now about the Galerkin method, there is a unique rational approximation to of the form (13) with positive residues and negative poles which gives this Hermite interpolant to , matching the data (12). From these positive residues and negative poles, by using what is essentially the Lanczos algorithm, one can find a continued fraction form for ,
Then from there one can extract which define a three-point staggered difference scheme with tridiagonal matrix for which the approximated solution at is exactly . That is, the rational approximation uniquely determines positive , such that solving the finite difference scheme
Note that and are ghost points only (and ghost grid steps) and the nonzero Neumann condition on the left yields a nonzero right hand side in the first component. That is, this yields an dimensional matrix system of the form
for the unknown , with entries of given by (15) in terms of the and .
When , the are primary and dual finite difference grid steps. For nonzero , the finite difference scheme (15) can be transformed to a discrete Schrödinger form with a discrete Liouville transform . For simplicity we do not do that here, since we do not use the finite difference framework for reconstructions.
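The passage from poles and residues to the tridiagonal finite difference operator can be sketched as follows (poles and residues below are made-up values). Running Lanczos on the diagonal pole matrix with the normalized square-root residues as starting vector produces a symmetric tridiagonal matrix whose impedance reproduces the rational approximation exactly; the grid steps would then be extracted from it by a continued fraction expansion, which we omit here.

```python
import numpy as np

# Illustrative poles -lam_k and positive residues y_k of a rational
# approximation of the form (13) (values made up for the sketch).
lam = np.array([2.0, 7.0, 15.0, 31.0])
y = np.array([0.9, 0.4, 0.6, 0.2])
c = y.sum()
n = len(lam)

# Lanczos on A = diag(lam) with starting vector sqrt(y_k / c)
# tridiagonalizes the pole matrix.
A = np.diag(lam)
V = np.zeros((n, n))
V[:, 0] = np.sqrt(y / c)
alpha = np.zeros(n); beta = np.zeros(n - 1)
for k in range(n):
    w = A @ V[:, k]
    alpha[k] = V[:, k] @ w
    w -= alpha[k] * V[:, k]
    if k > 0:
        w -= beta[k - 1] * V[:, k - 1]
    if k < n - 1:
        beta[k] = np.linalg.norm(w)
        V[:, k + 1] = w / beta[k]
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

# Impedance of the tridiagonal model vs. the rational data:
def F(s):
    return np.sum(y / (lam + s))

def F_fd(s):
    e1 = np.zeros(n); e1[0] = 1.0
    return c * (e1 @ np.linalg.solve(T + s * np.eye(n), e1))

print(abs(F(3.0) - F_fd(3.0)))   # zero up to rounding
```

Since the full n-step Lanczos basis is orthogonal, the identity c e1^T (T + sI)^{-1} e1 = F(s) holds for all s, not just at the data points.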
Consider again approximating the solution to (3) by a Galerkin method with the finite dimensional subspace , generated by the exact solutions at the spectral points . Since the finite difference grid does not easily extend to higher dimensions, we would rather work with Galerkin systems. Recall the Galerkin system (6)
where and the vector contains the coefficients for solution
Unfortunately, unlike the finite difference model, the stiffness and mass matrices and are full. However, one can in fact find a new basis for in which the Galerkin system is tridiagonal and exactly the same everywhere as the symmetrization of the finite difference system (15). To do this we perform Lanczos orthogonalization. Define to be the projection of the delta function onto , that is, the unique element of which satisfies
identified with the coefficient vector . Note that . Consider now the basis where we let and
and we orthogonalize using Gram-Schmidt with respect to the mass matrix inner product
This will yield continuous orthogonality of the basis functions. Note that is symmetric with respect to this inner product. We get new orthogonalized basis for :
Due to the orthogonalization, the new basis is somewhat localized, more refined near zero and coarsening out towards one, as spectrally matched finite difference grid steps do. See an example in Figure 1 for . Furthermore, is tridiagonal in the new basis, and since becomes the identity, this means that is also tridiagonal in the new basis.
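A minimal linear-algebra sketch of this orthogonalization, with random symmetric positive definite stand-ins for the full Galerkin matrices (in the method they come from the snapshots or, via the formulas above, from the data):

```python
import numpy as np

# Random SPD stand-ins for the full Galerkin matrices S and M, and a
# stand-in b0 for the projected delta function (names here are ours).
rng = np.random.default_rng(1)
n = 6
X = rng.standard_normal((n, n)); M = X @ X.T + n * np.eye(n)
Y = rng.standard_normal((n, n)); S = Y @ Y.T + n * np.eye(n)
b0 = rng.standard_normal(n)

# Lanczos on A = M^{-1} S, which is symmetric in the M-inner product
# <u, v>_M = u^T M v, starting from b0.
A = np.linalg.solve(M, S)
Q = np.zeros((n, n))
Q[:, 0] = b0 / np.sqrt(b0 @ M @ b0)
beta = 0.0
for k in range(n):
    w = A @ Q[:, k]
    alpha = Q[:, k] @ M @ w
    w = w - alpha * Q[:, k] - (beta * Q[:, k - 1] if k > 0 else 0.0)
    if k < n - 1:
        beta = np.sqrt(w @ M @ w)
        Q[:, k + 1] = w / beta

M_new = Q.T @ M @ Q              # becomes the identity
S_new = Q.T @ S @ Q              # becomes tridiagonal
off = np.triu(np.abs(S_new), 2)  # entries beyond the first superdiagonal
print(np.linalg.norm(M_new - np.eye(n)), off.max())
```

The three-term recurrence is what forces Q^T S Q to be tridiagonal; the M-orthonormality of the columns of Q is what turns the mass matrix into the identity.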
We first symmetrize (15) by setting
and and then multiplying each row by in the resulting equations for the . The resulting system is in the form
where is now symmetric. We recall that for the given impedance of the form (13) (now multiplied by ), the symmetric tridiagonal is uniquely determined.
Consider now the system (17) in the new basis. The mass matrix will be identity due to orthogonality. The stiffness matrix will be tridiagonal due to the fact that this is a Lanczos process with polynomials in , ( becomes tridiagonal and therefore ). Furthermore, since is the projection of the delta function, it will be nonzero at zero, but from orthogonality of the new basis functions we have that
for all . Hence the right hand side is nonzero only in the first component and is therefore equal to . This means that the system (17) is of the form
for all . By decomposing into the normalized eigenpairs of as was done in the proof of Lemma 3.1 except now using the standard Euclidean inner product in , one can calculate that
for the eigenvalues and
the squares of the first components of the normalized eigenvectors of . Similarly
where the eigenvalues and the squares of the first components of the normalized eigenvectors of . By the above impedance equality for all and the normalization of the residues, one must have that the poles and residues are equal (and that the multipliers from right hand sides are the same). Therefore and are the same by the uniqueness of the inverse eigenvalue problem, and , from which the result follows. ∎
From the above we also found that
which coincides with being a projection of the delta function.
That is, the solution components of the difference scheme (15) can be interpreted as coefficients of the spectrally converging Galerkin solution with respect to the orthogonal basis (16). See Figure 2 for an example of an orthogonalized basis with its equivalent staggered finite difference grid. Note that the masses of the basis functions are concentrated between the primary grid steps and the next dual grid step, and they spread out in the same way that the grid steps coarsen. In Figure 3, we normalize each function in the basis to be equal to at its corresponding primary grid point, and plot it against the nodal basis for the same space. (The nodal basis is the unique basis of satisfying .) Note that our orthogonal basis resembles the nodal one, but is less oscillatory away from the corresponding grid point. In what follows below, we propose an inversion method using the Lanczos orthogonalized basis, which makes no use of the finite difference grid steps, and is easily generalizable to other geometries.
3.3. Internal data generation and inversion in one dimension.
as a perturbation of the corresponding reference problem
Here we use the reference medium ; however, any known reference medium should work equally well. We note also that one can have either a Dirichlet or a Neumann condition on the right endpoint . The one dimensional numerical experiments below were done with a Dirichlet condition; a Neumann condition yielded similar results.
Read data (here synthetically generated) , for some positive for the perturbed problem (18), and compute all of the corresponding solutions for for the reference problem.
Compute projections of the delta function at zero onto by solving for in
where has components . These are the coefficients of projected in the basis for . Similarly, we calculate by solving
where is the reference data. This means that are the coefficients of the projected delta function in the reference basis for the space .
Set , and note that is symmetric with respect to the inner product , which is the same as the inner product on the corresponding continuous functions. Perform Lanczos orthogonalization with respect to this inner product on the basis
and let , be the Galerkin stiffness and mass matrices in this new basis . Note that is tridiagonal.
Similarly, set and perform Lanczos orthogonalization with respect to the inner product on the basis
to obtain the corresponding orthogonalized basis for .
Now choose any spectral value and solve , so that is the Galerkin approximation in to the solution of (18). Since we do not know from the data, we approximate by
Compute ; or reserve for another use.
In summary, the spectral data gives the Galerkin system ROM for (3) corresponding to spectral snapshots. Simultaneously, we can do the same for a reference medium. Lanczos orthogonalization localizes the spectral snapshots, forming a basis which depends only very weakly on the medium. In Figure 4, we show an example of the orthogonalized basis functions for a piecewise cubic varying from up to and back down to . Here they are plotted against the orthogonalized basis functions for the reference medium, and they appear indistinguishable. Therefore, although we do not have the internal spectral snapshots for the true medium, in the localized basis they are very close to those of the reference medium. For any given now, by solving the Galerkin system, we can obtain the coefficients, and this yields a good approximation to internal data. See Figure 5 for an example of an internal solution. One possibility is to use the internal solution to do inversion, for example by calculating . Note that for positive , is never zero. We try this simple approach in Figure 6 and Figure 7.
4. Higher dimensional problems.
4.1. Galerkin model from spectral data in higher dimensions.
for all . Generating a Galerkin subspace with these exact solutions
and using them as test functions, we have that
for all and . That is,
where and are the stiffness and mass matrices respectively. From this and the same derivation as in one dimension, one obtains them directly from the data
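A sketch of the block versions of these formulas, with a synthetic symmetric matrix-valued transfer function built from positive semidefinite residues (the residues, poles, and spectral points are illustrative choices of ours):

```python
import numpy as np

# Synthetic m x m transfer function F(s) = sum_k Y_k/(lam_k + s),
# with PSD residues Y_k (illustrative, not measured data).
rng = np.random.default_rng(3)
m, K = 2, 8
lam = np.sort(rng.uniform(1.0, 40.0, K))
Y = []
for _ in range(K):
    Z = rng.standard_normal((m, m))
    Y.append(Z @ Z.T)

def F(s):  return sum(Yk / (lk + s) for Yk, lk in zip(Y, lam))
def dF(s): return sum(-Yk / (lk + s) ** 2 for Yk, lk in zip(Y, lam))

s_pts = [1.0, 4.0, 9.0]
p = len(s_pts)
M = np.zeros((m * p, m * p)); S = np.zeros((m * p, m * p))
for i, si in enumerate(s_pts):
    for j, sj in enumerate(s_pts):
        ri, rj = slice(i * m, (i + 1) * m), slice(j * m, (j + 1) * m)
        if i == j:
            M[ri, rj] = -dF(si)                 # block -F'(s_i)
            S[ri, rj] = F(si) + si * dF(si)     # block (s F)'(s_i)
        else:
            M[ri, rj] = (F(si) - F(sj)) / (sj - si)
            S[ri, rj] = (sj * F(sj) - si * F(si)) / (sj - si)

# Block ROM transfer function B^T (S + s M)^{-1} B with block rhs B_j = F(s_j):
B = np.vstack([F(s) for s in s_pts])
def F_rom(t):
    return B.T @ np.linalg.solve(S + t * M, B)

print(np.linalg.norm(F_rom(4.0) - F(4.0)))   # matches the data at s_pts
```

The same block identity as in one dimension, S_ji + s_i M_ji = F_j, shows that the block ROM reproduces the matrix-valued data at each spectral point.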
4.2. Block Lanczos orthogonalization and generation of the block tridiagonal reduced order model.
Note the above and can be viewed as matrices with entries that are blocks. First we consider our data functional which is defined by
Similarly to what was done in one dimension, we find the projection of each component of onto . That is, we look for such that
That is, we identify with its coefficients in the basis for ;
which again can be obtained directly by solving the matrix system of blocks
We now use this as our starting vector for block Lanczos orthogonalization with respect to the basis generated by powers of the matrix :
and we orthogonalize using Gram-Schmidt with respect to the mass matrix inner product
This will yield again continuous orthogonality of the basis functions. Note that is symmetric with respect to this inner product and will be block tridiagonal in the new basis for :
In this new basis, will be identity and will be block tridiagonal, yielding a sparse, spectrally converging ROM generated entirely from the data.
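As a linear-algebra sketch of the block Lanczos step (random SPD stand-ins for the Galerkin matrices; in the method they are built from the data as above), orthogonalizing in the mass-matrix inner product with the projected delta functionals as the starting block turns the mass matrix into the identity and the stiffness matrix into a block tridiagonal matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
m, nb = 2, 4                         # block size and number of blocks
n = m * nb
X = rng.standard_normal((n, n)); M = X @ X.T + n * np.eye(n)
Y = rng.standard_normal((n, n)); S = Y @ Y.T + n * np.eye(n)
B0 = rng.standard_normal((n, m))     # stand-in for projected delta functionals

def m_qr(W):
    """Gram-Schmidt QR of the columns of W in the M-inner product."""
    Q = W.copy(); R = np.zeros((W.shape[1], W.shape[1]))
    for j in range(W.shape[1]):
        for i in range(j):
            R[i, j] = Q[:, i] @ M @ Q[:, j]
            Q[:, j] -= R[i, j] * Q[:, i]
        R[j, j] = np.sqrt(Q[:, j] @ M @ Q[:, j])
        Q[:, j] /= R[j, j]
    return Q, R

# Block Lanczos on A = M^{-1} S (symmetric in the M-inner product).
A = np.linalg.solve(M, S)
Q = np.zeros((n, n))
Q[:, :m], _ = m_qr(B0)
for k in range(nb):
    W = A @ Q[:, k * m:(k + 1) * m]
    for j in range(max(0, k - 1), k + 1):     # subtract two previous blocks
        Qj = Q[:, j * m:(j + 1) * m]
        W -= Qj @ (Qj.T @ M @ W)
    if k < nb - 1:
        Q[:, (k + 1) * m:(k + 2) * m], _ = m_qr(W)

M_new = Q.T @ M @ Q                  # identity
S_new = Q.T @ S @ Q                  # block tridiagonal with m x m blocks
mask = np.zeros((n, n), bool)        # pattern of the allowed blocks
for k in range(nb):
    lo, hi = max(0, (k - 1) * m), min(n, (k + 2) * m)
    mask[k * m:(k + 1) * m, lo:hi] = True
print(np.linalg.norm(M_new - np.eye(n)), np.linalg.norm(S_new[~mask]))
```

Only the current and previous blocks need to be subtracted at each step; orthogonality to earlier blocks follows from the symmetry of A in the M-inner product, exactly as in the scalar Lanczos recurrence.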
4.3. Embedding into the continuous problem and generation of internal data.
We now describe the algorithm to generate the internal solutions, which is simply a generalization of that from Section 3.3. Here we assume we know or can compute solutions to the reference problem
Read data (again here synthetically generated)
corresponding to some positive for and sources/receivers for the perturbed problem (1).
Compute all of the corresponding solutions for for the reference problem (27) and the corresponding reference data sets and .
Compute projection of the vector of functionals onto
and onto the corresponding reference Galerkin space
that is, we compute the coefficient vectors and by solving the systems
Set , and perform block Lanczos orthogonalization with respect to the inner product using
and let , be the Galerkin stiffness and mass matrices in this new basis for . Note that is block tridiagonal with blocks.
Similarly, set and perform block Lanczos orthogonalization with respect to the inner product on the basis
to obtain the corresponding orthogonalized basis for .
Now choose any spectral value and solve the Galerkin system in the new basis , so that for each ,