# A model reduction approach for inverse problems with operator valued data

We study the efficient numerical solution of linear inverse problems with operator valued data which arise, e.g., in seismic exploration, inverse scattering, or tomographic imaging. The high-dimensionality of the data space implies extremely high computational cost already for the evaluation of the forward operator, which makes a numerical solution of the inverse problem, e.g., by iterative regularization methods, practically infeasible. To overcome this obstacle, we develop a novel model reduction approach that takes advantage of the underlying tensor product structure of the problem and which allows to obtain low-dimensional certified reduced order models of quasi-optimal rank. A complete analysis of the proposed model reduction approach is given in a functional analytic setting and the efficient numerical construction of the reduced order models as well as of their application for the numerical solution of the inverse problem is discussed. In summary, the setup of a low-rank approximation can be achieved in an offline stage at essentially the same cost as a single evaluation of the forward operator, while the actual solution of the inverse problem in the online phase can be done with extremely high efficiency. The theoretical results are illustrated by application to a typical model problem in fluorescence optical tomography.


## 1 Introduction

We consider the efficient numerical solution of linear inverse problems with operator valued data modeled by abstract operator equations

 Tc=Mδ. (1)

We assume that Mδ, representing the possibly perturbed measurements, is a linear operator of Hilbert-Schmidt class between Hilbert spaces Y and Z′, the dual of Z. We further assume that the forward operator T is linear and compact, and admits a factorization of the form

 Tc=V′D(c)U. (2)

Problems of this kind arise, for instance, as mathematical models for tomographic applications [1, 25] or inverse scattering problems [6, 13], and as linearizations of related nonlinear problems; see e.g. [8, 20, 29] and the references given there. In such applications, U typically models the propagation of the excitation fields generated by the sources, D(c) describes the interaction with the medium to be probed, and V′ models the emitted fields which can be recorded by the detectors. In the following, we briefly outline our basic approach towards the numerical solution of Eq. 1 and Eq. 2 and review related work in the literature.

### 1.1 Regularized inversion

Due to the special functional analytic setting, the inverse problem Eq. 1 and Eq. 2 amounts to an ill-posed linear operator equation in Hilbert spaces, and standard regularization theory can be applied for its stable solution [2, 9]. Following the usual arguments, we assume that Mδ is a perturbed version of the exact data M and that a bound on the measurement noise

 ∥M−Mδ∥HS(Y,Z′)≤δ (3)

is available. We further denote by c† the minimum norm solution of Eq. 1 with Mδ replaced by M. A stable approximation for the solution c† can then be obtained by the regularized inversion of Eq. 1, e.g., via spectral regularization methods

 cδα =gα(T⋆T)T⋆Mδ=T⋆gα(TT⋆)Mδ. (4)

Here T⋆ denotes the adjoint of the operator T. A typical choice of the filter function gα in this context is gα(λ)=(λ+α)−1, which leads to Tikhonov regularization cδα=(T⋆T+αI)−1T⋆Mδ; we refer to [2, 9] for details.
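As a concrete matrix analogue of Eq. 4 with the Tikhonov filter gα(λ)=(λ+α)−1, the following sketch (plain numpy; the synthetic operator and all names are illustrative, not taken from the paper) applies the spectral filter through a singular value decomposition and can be checked against the normal-equations form (T⋆T+αI)−1T⋆Mδ:

```python
import numpy as np

def tikhonov(T, M_delta, alpha):
    """Spectral regularization g_alpha(T*T) T* M with the Tikhonov
    filter g_alpha(lam) = 1/(lam + alpha), applied through the SVD."""
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    f = s / (s**2 + alpha)          # filter factors g_alpha(s^2) * s
    return Vt.T @ (f * (U.T @ M_delta))

rng = np.random.default_rng(0)
T = rng.standard_normal((50, 20))   # well-conditioned toy forward operator
c_true = rng.standard_normal(20)
M = T @ c_true                      # noise-free data for the check
c_rec = tikhonov(T, M, alpha=1e-8)
```

For noise-free data and small α this essentially reproduces the minimum norm solution; for ill-posed operators, α balances stability against accuracy as discussed in [2, 9].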

For the actual computation of the regularized solution cδα, a sufficiently accurate finite dimensional approximation of the operator T is required, which is usually obtained by some discretization procedure; in the language of model order reduction, this is called the truth or high-fidelity approximation [3, 27]. In the following discussion, we will not distinguish between infinite dimensional operators and their truth approximations. We thus assume that Y=ℝny and Z=ℝnz, with dimensions ny, nz typically very large, which is required to guarantee that the truth approximation is sufficiently accurate. We may then identify c with a vector in ℝnc, Mδ with a matrix in ℝnz×ny, and T with a 3-tensor in ℝnz×ny×nc or a matrix in ℝnz⋅ny×nc.

### 1.2 Related work

The high dimensionality of the problem poses severe challenges for the numerical solution of the inverse problem Eq. 1 and Eq. 2, and different model reduction approaches have been proposed to reduce the computational complexity. These typically rely on the construction of certain low-rank approximations for the forward operator T or its adjoint T⋆, e.g., by truncated singular value decomposition. For problems with regular geometries and constant coefficients, fast analytic singular value decompositions of the involved linear operators have been used in [22] based on Fourier techniques. In general, however, the full assembly and decomposition of T is computationally prohibitive for many applications. Krylov subspace methods [16, 30] and randomized algorithms [14, 23] then provide alternatives that make it possible to construct approximate singular value decompositions using only a moderate number of evaluations of T and its adjoint T⋆. By recursively combining randomized singular value decompositions for subproblems associated with a single frequency, approximate singular value decompositions have been constructed in [5] in the context of inverse medium problems. A particular strategy towards dimension reduction consists in synthetically reducing the number of sources. Such simultaneous or encoded sources have been used in various applications with a large number of excitations and detectors, e.g., in geophysics [15, 19] and tomography [31]; see [28] for further references.

In a recent work [21], motivated by [20], the forward operator is assumed to be the Khatri-Rao product of the matrices corresponding to the adjoint operators in our setting; this induces a structure similar to Eq. 2 if D(c) amounts to a diagonal matrix with c on its diagonal. The Khatri-Rao product structure allows the efficient evaluation of T and T⋆, required for the solution of the inverse problem, using pre-computed low-rank approximations for U and V; see also [18] for a survey on tensor decompositions. The computational cost of the reconstruction algorithms proposed in [21] is, however, still rather high and may be prohibitive for problems with distributed parameters. For a survey of model reduction techniques that aim to reduce the dimension of the parameter space and to accelerate the solution of the computational models, let us refer to [3, 27].

The approach developed in this paper aims at systematically constructing approximations for the operator T with a quasi-optimal low rank comparable to that of the truncated singular value decomposition, while at the same time allowing for a more efficient construction and guaranteeing provable approximation error bounds. After model reduction, even very high dimensional inverse problems can be solved in fractions of a second. In the following, we outline our approach in more detail.

### 1.3 Model reduction

A possible and rather general strategy towards dimension reduction, which we also use in this paper, amounts to projection in data space

 TN=QNT, (5)

where QN is chosen as some orthogonal projection with finite rank N, the dimension of the range of QN. Since we assume that T is compact, we can approximate it by finite rank operators, i.e., we can choose N sufficiently large such that

 ∥QNT−T∥X→HS(Y,Z′)≤δ. (6)

Note that typically N≪ny⋅nz, where ny, nz are the dimensions of the truth approximation used for the computations. Let us recall that the approximation of minimal rank N satisfying Eq. 6 is obtained by truncated singular value decomposition of the operator T, which will serve as the benchmark in the following discussion.

For the stable and efficient numerical solution of the inverse problem Eq. 1 and Eq. 2, we may then consider the low-dimensional regularized approximation

 cδα,N=gα(T⋆NTN)T⋆NMδN,MδN:=QNMδ. (7)

As shown in [24], the low-rank approximation cδα,N defined in Eq. 7 has essentially the same quality as the infinite dimensional approximation cδα, as long as the perturbation bound Eq. 6 can be guaranteed. In the sequel, we therefore focus on the numerical realization of Eq. 7, which can be roughly divided into the following two stages:
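In matrix form, the projection step Eq. 5 and the reduced regularization of Eq. 7 can be sketched as follows (a synthetic operator with dyadically decaying spectrum; all dimensions and names are our own choices). The projection QN onto the N leading left singular vectors realizes the perturbation bound Eq. 6 with δ=σN+1:

```python
import numpy as np

rng = np.random.default_rng(1)
n_data, n_param, N = 200, 60, 20

# synthetic ill-posed operator with singular values 2^0, 2^-1, ...
U0, _ = np.linalg.qr(rng.standard_normal((n_data, n_param)))
V0, _ = np.linalg.qr(rng.standard_normal((n_param, n_param)))
s = 2.0 ** -np.arange(n_param)
T = (U0 * s) @ V0.T

# Q_N: orthogonal projection onto the N leading left singular vectors
Ut, st, Vt = np.linalg.svd(T, full_matrices=False)
QN = Ut[:, :N] @ Ut[:, :N].T
TN = QN @ T                          # Eq. 5, an operator of rank N
# perturbation bound Eq. 6 holds with delta = sigma_{N+1}
assert abs(np.linalg.norm(TN - T, 2) - st[N]) < 1e-9

# reduced regularized solution, Tikhonov filter as in Eq. 4
c_true = rng.standard_normal(n_param)
M = T @ c_true
alpha = 1e-8
c_N = np.linalg.solve(TN.T @ TN + alpha * np.eye(n_param), TN.T @ (QN @ M))
```

Components of the solution aligned with the retained singular directions are recovered accurately; the remaining components are damped by the filter.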

• Setup of the approximations QN, TN, and T⋆N. This compute intensive part can be done in an offline stage, and the constructed approximations can be reused for the repeated solution of the inverse problem Eq. 1 for multiple data.

• Computation of the regularized solution Eq. 7. This online stage, which is relevant for the actual solution of Eq. 1, requires the following three steps: the compression of the data, i.e., the computation of MδN=QNMδ; the analysis, i.e., the application of the regularized inverse gα(T⋆NTN)T⋆N in the reduced space; and the synthesis, i.e., the expansion of the result in the reconstruction space, yielding cδα,N.

For the complexity and memory cost of these steps, we assume that Tc=V′D(c)U is the truth approximation obtained after discretization. Let us note that the analysis step is completely independent of the large system dimension of the truth approximation, and therefore the compression and synthesis steps are the compute intensive parts of the online stage. If ny⋅nz≫nc, which is the typical situation [5, 21], then the data compression turns out to be the most compute and memory intensive step.

### 1.4 Tensor product compression

To further reduce the memory cost of the data compression, we may exploit the particular structure Eq. 2 of the forward operator, which is reflected in the tensor product structure of the measurement space HS(Y,Z′). We define a tensor product projection operator

 QK,KMδ=Q′K,VMδQK,U (8)

via two separate projections QK,U, QK,V of rank K in the spaces Y and Z of sources and detectors. After defining UK=UQK,U and VK=VQK,V, which are again operators of rank K, we obtain a tensor product approximation

 TK,Kc=QK,KTc=V′KD(c)UK (9)

of the forward operator whose rank is K². In the spirit of [15, 19], the basis vectors spanning the ranges of the projections QK,U and QK,V could be interpreted as optimal sources and detectors; their choice and construction is also strongly related to optimal experimental design [26].

With similar arguments as before, we may choose K sufficiently large such that

 ∥TK,K−T∥X→HS(Y,Z′)≤δ, (10)

which yields a corresponding low-dimensional approximation for the regularized solution of Eq. 1 with still optimal approximation properties [24]. Further note that the tensor product structure of QK,K allows us to compute the projected data

 MδK,K=(Q′K,VMδ)QK,U

in two steps and that the first projection can be applied already during the recording of the data. Simultaneous access to the full data Mδ is therefore never required, and the memory cost of data recording and compression is thereby reduced to O(K⋅ny). If a moderate rank K suffices in Eq. 10, then this is substantially smaller than the O(ny⋅nz) memory cost for computing QNMδ with a generic projection QN which does not take advantage of the underlying tensor product structure. Similar projections QK,U and QK,V of U and V were also used in [21] to speed up the data compression step.
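The two-step compression of Eq. 8 can be sketched as follows (illustrative dimensions and names; Qv and Qu stand for orthonormal bases of the ranges of the detector- and source-side projections). The first projection is applied column by column, i.e., per recorded source, so only a K×n_src array needs to be kept instead of the full n_det×n_src data matrix; the full matrix is formed below only to verify the result:

```python
import numpy as np

rng = np.random.default_rng(2)
n_det, n_src, K = 500, 400, 10

# orthonormal bases spanning the ranges of the rank-K projections
Qv = np.linalg.qr(rng.standard_normal((n_det, K)))[0]   # detectors
Qu = np.linalg.qr(rng.standard_normal((n_src, K)))[0]   # sources

M = rng.standard_normal((n_det, n_src))   # full data, for comparison only
compressed = np.empty((K, n_src))
for j in range(n_src):
    # first projection, applied during recording of measurement column j
    compressed[:, j] = Qv.T @ M[:, j]
M_KK = compressed @ Qu                    # second projection, K x K result
```

The streaming loop needs O(K⋅n_src) memory for the intermediate array, compared to O(n_det⋅n_src) for a generic projection of the assembled data matrix.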

### 1.5 Recompression

One major disadvantage of the tensor product projection QK,K is its still relatively high rank K², which is typically much larger than the optimal rank N achievable by truncated singular value decomposition. To overcome this, we employ another compression PN of TK,K, giving rise to a projection

 QN=PNQK,K (11)

with rank N that can be proven to be virtually the same as the optimal rank of the truncated singular value decomposition. In this way, we can combine the advantages of an almost optimal rank approximation and the tensor product pre-compression of the data. It turns out that this two-step construction is also beneficial for the computation of the projections QK,U, QK,V, and PN and the operators UK and VK in the offline phase. Our analysis reveals that actually only a hyperbolic cross approximation [7] of the tensor product approximation TK,K is required for the recompression, which substantially improves the computational complexity.
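The recompression Eq. 11 can be mimicked on matrices as follows (a sketch under our own synthetic assumptions; the matrix Q stands in for the pre-compression range, here of rank K rather than K²). Since the pre-compression confines the operator to a low-dimensional range, the final projection PN is obtained from the singular value decomposition of a small matrix, never of the full operator:

```python
import numpy as np

rng = np.random.default_rng(3)
n, K, delta = 300, 30, 1e-6

# synthetic operator with fast spectral decay (matrix stand-in for T)
U0, _ = np.linalg.qr(rng.standard_normal((n, n)))
V0, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 2.0 ** -np.arange(n, dtype=float)
T = (U0 * s) @ V0.T

# step 1: a delta-accurate pre-compression confining T to a K-dim range
Q = U0[:, :K]
T_KK = Q @ (Q.T @ T)

# step 2: recompression -- SVD of the *small* K x n matrix Q.T @ T_KK
Uk, sk, _ = np.linalg.svd(Q.T @ T_KK, full_matrices=False)
N = int(np.searchsorted(-sk, -delta))   # smallest N with sigma_{N+1} <= delta
B = Q @ Uk[:, :N]                       # orthonormal basis of the reduced range
PN = B @ B.T                            # rank-N projection, Q_N = P_N Q_{K,K}
err = np.linalg.norm(PN @ T_KK - T, 2)  # stays below delta in this setup
```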

### 1.6 Main contributions and outline of the paper

We will present a complete analysis of the proposed model reduction approach in an infinite-dimensional functional analytic setting. Our results thus become independent of the underlying truth approximation, which is only used for the actual computations, and we obtain a certified reduced order model with guaranteed error bounds. In addition, we demonstrate that these models can be constructed at substantially lower cost than typical low-rank approximations obtained by approximate singular value decompositions.

The remainder of the manuscript is organized as follows: In Section 2, we discuss in detail the construction of TN for problems of the form Eq. 2. Under mild assumptions on the mapping properties of the operators U, D, and V, we show how to define appropriate projections QK,U, QK,V, and PN in order to rigorously establish the approximation property Eq. 6. In particular, we show how to construct the second projection PN in a post-processing step that only requires access to the tensor product approximation TK,K or its hyperbolic cross approximation, but not to the full operator T. To illustrate the applicability of our theoretical results, we discuss in Section 3 a particular example stemming from fluorescence diffuse optical tomography. An appropriate choice of function spaces allows us to verify all conditions required for the analysis of our approach. In Section 4, we report in detail on numerical tests, in which we demonstrate the computational efficiency of the model reduction approach and the resulting numerical solution of the inverse problem.

## 2 Analysis of the model reduction approach

We start by introducing our basic notation and then provide a complete analysis of the data compression and model reduction approach outlined in the introduction.

### 2.1 Notation

Function spaces will be denoted by A, B, etc., and are assumed to be separable Hilbert spaces with scalar product (⋅,⋅)A and norm ∥⋅∥A. By A′ we denote the dual of A, i.e., the space of bounded linear functionals on A, and by ⟨⋅,⋅⟩A′×A the corresponding duality product. Furthermore, L(A,B) denotes the Banach space of bounded linear operators S:A→B with norm ∥S∥L(A,B). We write R(S) for the range of the operator S and define rank(S)=dimR(S). By S′ and S⋆ we denote the dual and the adjoint of a bounded linear operator S:A→B defined, respectively, for all a∈A, b∈B, and b′∈B′ by

 ⟨S′b′,a⟩A′×A=⟨b′,Sa⟩B′×Band(S⋆b,a)B =(b,Sa)A. (12)

The two operators S′ and S⋆ are directly related via Riesz isomorphisms. Let us further recall that any compact linear operator S:A→B has a singular value decomposition, i.e., a countable system {(σk,ak,bk)}k≥1 such that

 Sa=∑k≥1(a,ak)Aσkbk, (13)

with singular values σ1≥σ2≥…≥0 and with {ak}k≥1 and {bk}k≥1 denoting orthonormal bases for N(S)⊥ and the closure of R(S), respectively. Also note that Sak=σkbk and S⋆bk=σkak. Moreover, by the Courant-Fischer min-max principle [12], the (k+1)st singular value can be characterized by

 σk+1=minAkmaxa∈A⊥k∥Sa∥B/∥a∥A, (14)

where the Ak denote k-dimensional subspaces of A. Hence every compact linear operator can be approximated by truncated singular value decompositions

 SKa=∑k≤K(a,ak)Aσkbk, (15)

with error ∥S−SK∥L(A,B)=σK+1. Conversely, any bounded linear operator that can be approximated in norm by finite-rank operators is necessarily compact.
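In matrix terms, Eq. 14 and Eq. 15 say that truncating the singular value decomposition after K terms incurs a spectral-norm error of exactly σK+1; a quick numpy check on a random matrix of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((40, 25))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

K = 10
A_K = (U[:, :K] * s[:K]) @ Vt[:K]   # truncated SVD, Eq. 15
# spectral-norm error equals the first discarded singular value
err = np.linalg.norm(A - A_K, 2)
```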

We further denote by HS(A,B) the Hilbert-Schmidt class of compact linear operators S:A→B whose singular values are square summable. Note that HS(A,B) is a Hilbert space equipped with the scalar product (R,S)HS(A,B)=∑k≥1(Rak,Sak)B, where {ak}k≥1 is an orthonormal basis of A. Moreover, the scalar product and the associated norm are independent of the choice of this basis. Let us mention the following elementary results, which will be used several times later on.

###### Lemma 1

(a) Let S∈HS(A,B). Then there exists a sequence of linear operators SK of rank K, such that ∥S−SK∥L(A,B)≲K−1/2∥S∥HS(A,B).

(b) Let R, S be two bounded linear operators and let at least one of them be Hilbert-Schmidt. Then the composition RS is Hilbert-Schmidt and

 ∥RS∥HS(A,C) ≤∥R∥L(B,C)∥S∥HS(A,B),or ∥RS∥HS(A,C) ≤∥R∥HS(B,C)∥S∥L(A,B).

Here and below, we use a≲b to express a≤Cb with some constant C that is irrelevant in the given context, and we write a≃b when a≲b and b≲a.

For convenience of the reader, we provide a short proof of these assertions.

###### Proof

The assumption S∈HS(A,B) implies that S is compact with square summable singular values. Since the singular values are non-increasing, Kσ2K+1≤∑k≤Kσ2k≤∥S∥2HS(A,B), and hence the truncated singular value decomposition SK satisfies ∥S−SK∥L(A,B)=σK+1≤K−1/2∥S∥HS(A,B), which yields (a). After choosing an orthonormal basis {ak}k≥1 of A, we can write

 ∥RS∥2HS(A,C) =∑k∥RSak∥2C ≤∥R∥2L(B,C)∑k≥1∥Sak∥2B=∥R∥2L(B,C)∥S∥2HS(A,B)

which implies the first inequality of assertion (b). The second inequality follows from the same arguments applied to the adjoint operators, noting that the respective norms of an operator and its adjoint coincide.
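For matrices, the Hilbert-Schmidt norm is the Frobenius norm, and the inequalities of Lemma 1(b) can be checked numerically (random matrices of our own choosing):

```python
import numpy as np

rng = np.random.default_rng(5)
R = rng.standard_normal((30, 20))
S = rng.standard_normal((20, 10))

hs = lambda A: np.linalg.norm(A, 'fro')   # Hilbert-Schmidt norm
op = lambda A: np.linalg.norm(A, 2)       # operator norm

# Lemma 1(b): ||RS||_HS <= ||R||_op ||S||_HS and ||RS||_HS <= ||R||_HS ||S||_op
lhs = hs(R @ S)
bound1 = op(R) * hs(S)
bound2 = hs(R) * op(S)
```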

### 2.2 Preliminaries and basic assumptions

We now introduce in more detail the functional analytic setting for the inverse problem Eq. 1 used in our considerations. We assume that the operators U, D, and V appearing in definition Eq. 2 satisfy U∈HS(Y,U), V∈HS(Z,V), and D∈L(X,L(U,V′)). Following our convention, all function spaces appearing in these conditions, except the operator space L(U,V′), are separable Hilbert spaces. We can now prove the following assertions.

###### Lemma 2

Let the conditions of Section 2.2 be valid. Then Tc=V′D(c)U defines a bounded linear operator T:X→HS(Y,Z′) and, additionally, T is compact.

###### Proof

Linearity of T is clear by construction and by the linearity of U, D, and V′. Now let {yk}k≥1 denote an orthonormal basis of Y and let c∈X be arbitrary. Then

 ∥Tc∥2HS(Y,Z′) =∑k≥1∥V′D(c)Uyk∥2Z′≤∥V′D(c)∥2L(U,Z′)∑k≥1∥Uyk∥2U ≤∥V′∥2L(V′,Z′)∥D∥2L(X→L(U,V′))∥c∥2X∥U∥2HS(Y,U),

where we used Lemma 1 in the second step and the boundedness of the operators in the third. Since ∥V′∥L(V′,Z′)=∥V∥L(Z,V)≤∥V∥HS(Z,V), we obtain

 ∥Tc∥HS(Y,Z′)≤∥U∥HS(Y,U)∥V∥HS(Z,V)∥D∥L(X,L(U,V′))∥c∥X

for all c∈X, which shows that T is bounded. Using Lemma 1(a), we can further approximate U and V by operators UK, VK of rank K, such that

 ∥U−UK∥L(Y,U)≲K−1/2and∥V−VK∥L(Z,V)≲K−1/2, (16)

and we can define an operator TK,K by TK,Kc=V′KD(c)UK, which yields an approximation of T of rank at most K². From Lemma 1(b), we infer that

 ∥T−TK,K∥L(X,HS(Y,Z′)) ≤(∥V′−V′K∥L(V′,Z′)∥U∥HS(Y,U)+∥V′∥HS(V′,Z′)∥U−UK∥L(Y,U))∥D∥L(X,L(U,V′)).

Using the conditions of Section 2.2 and the bounds Eq. 16, we thus conclude that T can be approximated uniformly by finite-rank operators, and hence T is compact.

### 2.3 Tensor product approximation

As an immediate consequence of the arguments used in the previous proof, we obtain the following approximation result.

###### Lemma 3

Let the conditions of Section 2.2 hold. Then for any δ>0 there exist K=K(δ)∈ℕ and rank-K approximations UK=UQK,U and VK=VQK,V such that

 ∥U−UK∥L(Y,U)≤δand∥V−VK∥L(Z,V)≤δ. (17)

Here QK,U and QK,V are orthogonal projections on Y and Z, respectively. Furthermore, the operator TK,K defined by TK,Kc=V′KD(c)UK has rank at most K² and satisfies

 ∥TK,K−T∥L(X,HS(Y,Z′))≲δ. (18)

If the singular values of U and V satisfy σk≲k−α for some α>1/2, then the assertions hold with K≃δ−1/α, and consequently rank(TK,K)≲δ−2/α.

###### Remark 1

The operators UK and VK can be obtained by truncated singular value decomposition of U and V, and QK,U and QK,V then are the projections onto the spaces spanned by the first K right singular vectors of U and V, respectively. The assertions of the lemma further imply in particular that the singular values of T decay at least algebraically; the latter follows from the fact that U and V are Hilbert-Schmidt, and thus their singular values are square summable.

### Hyperbolic cross approximation

Any operator T̃ with ∥T−T̃∥L(X,HS(Y,Z′))≲δ will be called a δ-approximation for T in the following. Note that TK,K is a δ-approximation of rank K², while the δ-approximation of minimal rank N is obtained by truncated singular value decomposition Eq. 15. In particular, this implies that N≤K². We will now illustrate that the converse statement is in general not true, i.e., the tensor product approximation may have a substantially higher rank than required by the δ-approximation property.

###### Lemma 4

Let σk,U≲k−α and σk,V≲k−α for some α>1/2 and let 0<β<α−1/2. Then we have σk,T≲k−β, and for any δ>0, we can find N≲δ−1/β and an approximation TN of T of rank N, such that

 ∥T−TN∥L(X,HS(Y,Z′))≲δ. (19)

###### Proof

Let {(σk,U,ak,U,bk,U)}k≥1 and {(σk,V′,ak,V′,bk,V′)}k≥1 denote the singular systems for U and V′, respectively. We now show that the hyperbolic cross approximation [7]

 TNc =∑k≥1∑Nkℓ=1σℓ,Uσk,V′(⋅,aℓ,U)Y⟨D(c)bℓ,U,ak,V′⟩V′×Vbk,V′,

with the choice Nk≃Nk−(1+ϵ), ϵ>0 sufficiently small, and N≃δ−1/β has the required properties. By counting, one can verify that rank(TN)≲N, since by construction ∑k≥1k−(1+ϵ) is summable. Furthermore, we can bound

 =∑m≥1∥(Tc−TNc)am,U∥2V′ =∑k≥1σ2k,V′∣∣∑ℓ≥Nk+1σℓ,U⟨D(c)bℓ,U,ak,V′⟩V′×V∣∣2 ≤∑k≥1σ2k,V′σ2Nk∥D(c)∥2L(U,V′)∥ak,V′∥2V′.

By observing that σ2Nk≲N−2βk2β(1+ϵ), that ∥D(c)∥L(U,V′)≲∥c∥X, and that ∥ak,V′∥V′=1, and by using the decay properties of the singular values, we obtain

 ∥T−TN∥2L(X,HS(U,V′)) ≲∑k≥1k−2α+2β(1+ϵ)N−2β≲δ2.

In the last step, we used the fact that 2α−2β(1+ϵ)>1, which holds for ϵ>0 sufficiently small since β<α−1/2, and that N−β≲δ, which follows immediately from the construction.

###### Remark 2

Comparing the results of Lemmas 3 and 4, we expect to obtain a tensor product approximation of rank K²≃δ−2/α, while the hyperbolic cross approximation, and consequently also the truncated singular value decomposition of the same accuracy, only has rank N≃δ−1/β, which may be substantially smaller for β>α/2. Hence the rank of the tensor product approximation will, in general, not be of optimal order.
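The rank counting behind Remark 2 can be made concrete with a small script (the decay exponent and tolerance are illustrative choices of ours): with σℓ,U≃ℓ−α and σk,V≃k−α, keeping only the index pairs (ℓ,k) whose product of singular values exceeds the tolerance selects a hyperbolic cross, whose cardinality grows far more slowly than the full K×K tensor grid:

```python
import numpy as np

alpha, delta = 1.5, 1e-4
cutoff = delta ** (-1 / alpha)      # keep products sigma_l * sigma_k > delta
K = int(np.ceil(cutoff))            # per-factor rank for accuracy delta

# (l*k)^-alpha > delta  <=>  l*k < cutoff: a hyperbolic cross of index pairs
cross = sum(min(K, int(cutoff / k)) for k in range(1, K + 1))
full = K * K                        # modes in the full K x K tensor grid
```

For these parameters the cross contains on the order of N log N modes versus K² for the full grid.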

### 2.4 Quasi-optimal low-rank approximation via recompression

We will now show that a further compression of the tensor product approximation allows to obtain a low-rank approximation with quasi-optimal rank.

###### Lemma 5

Let δ>0 and let TK,K denote a δ-approximation for T according to Lemma 3. Further assume that the singular values of T decay like

 σk,T≲k−β,β>0. (20)

Then there exists an orthogonal projection PN on HS(Y,Z′) with rank N≲δ−1/β and

 ∥T−PNTK,K∥L(X,HS(Y,Z′))≲δ. (21)

Moreover, PN can be constructed using only knowledge of the approximation TK,K.

###### Proof

For ease of notation, we use ∥⋅∥ to abbreviate the corresponding operator norm ∥⋅∥L(X,HS(Y,Z′)). Now let PN,TT denote the truncated singular value decomposition of T with rank N≲δ−1/β. Using assumption Eq. 20 we obtain that

 ∥PN,TT−T∥=σN+1,T≲δ.

Furthermore, let PNTK,K be the truncated singular value decomposition of TK,K with the same rank N as above. Then by the triangle inequality

 ∥T−PNTK,K∥ ≤∥T−TK,K∥+∥TK,K−PNTK,K∥.

The first term can be bounded by the δ-approximation property of TK,K. From the min-max characterization of the singular values Eq. 14, we know that the truncated singular value decomposition yields the best approximation in the set of bounded linear operators of rank N; this is also known as the Eckart-Young-Mirsky theorem. Hence the second term can be further estimated by

 ∥(I−PN)TK,K∥ ≤∥(I−PN,T)TK,K∥ ≤∥(I−PN,T)T∥+∥(I−PN,T)(T−TK,K)∥ ≤σN+1,T+∥T−TK,K∥≲δ.

Here we used that σN+1,T≲δ and the δ-approximation property of TK,K. The result then follows by combining the two estimates derived above.

###### Remark 3

In the previous lemma, we could use, instead of TK,K, any other δ-approximation of the operator T, e.g., the hyperbolic cross approximation TN constructed in Lemma 4; the proof carries over verbatim. In fact, the lemma relies on a well-known result from perturbation theory [17], viz., the singular values of a δ-approximation lie in a δ-neighborhood of the singular values of T.

### 2.5 Summary

Let us briefly summarize the main observations and results of this section. We constructed a certified reduced order model PNTK,K, i.e., a δ-approximation, for the operator T with quasi-optimal rank comparable to that of the truncated singular value decomposition. The given construction is based on certified low-rank approximations UK, VK for the operators U and V, which can be computed more efficiently than a low-rank approximation for the full operator T. The resulting tensor product approximation TK,K can then be further compressed by truncated singular value decomposition, yielding the quasi-optimal low-rank approximation PNTK,K.

As can be seen from the proof of Lemma 4 and Remark 3, the tensor product approximation TK,K is not really needed but can be replaced by its hyperbolic cross approximation when computing the final approximation PNTK,K. This allows us to substantially improve the computational complexity of the offline phase and is a key ingredient for the efficient realization of our model reduction approach.

The analysis in this section is done in abstract spaces and applies verbatim to infinite-dimensional operators as well as to their finite-dimensional truth approximations obtained after discretization. As a consequence, the computational results, e.g., the ranks K and N of the approximations, can be expected to be essentially independent of the actual truth approximation used for the computations.

## 3 Fluorescence optical tomography

In order to illustrate the viability of the theoretical results derived in the previous section, we now consider in some detail a typical application arising in medical imaging.

### 3.1 Model equations

Fluorescence optical tomography aims at retrieving information about the concentration c of a fluorophore inside an object by illuminating this object from outside with near infrared light and measuring the light reemitted by the fluorophores at a different wavelength. The distribution of the light intensity ux inside the object, generated by a source qx at the boundary, is described by

 −∇⋅(κx∇ux)+μxux =0, in Ω, (22) κx∂nux+ρxux =qx, on ∂Ω. (23)

We assume that Ω⊂ℝd, d=2,3, is a bounded domain with smooth boundary ∂Ω enclosing the object under consideration. The light intensity um emitted by the fluorophores is described by a similar equation

 −∇⋅(κm∇um)+μmum =cux, in Ω, (24) κm∂num+ρmum =0, on ∂Ω. (25)

The model parameters κx, μx, ρx and κm, μm, ρm characterize the optical properties of the medium at the excitation and emission wavelengths; we assume these parameters to be known, e.g., determined by independent measurements [1]. As shown in [8], the above linear model, which can be interpreted as a Born approximation or linearization, is a valid approximation for moderate fluorophore concentrations.

### 3.2 Forward operator

The forward problem in fluorescence optical tomography models an experiment in which the emitted light, resulting from excitation with a known source and after interaction with a given fluorophore concentration, is measured at the boundary. The measurable quantity is the outward photon flux, which is proportional to the boundary values of um; see [1] for details. The potential data for a single excitation with source qx measured by a detector with characteristic qm can be described by

 ⟨(Tc)qx,qm⟩=∫∂Ωumqmds(x), (26)

where ux and um are determined by the boundary value problems Eq. 22 to Eq. 25. The inverse problem finally consists of determining the concentration c of the fluorophore marker from measurements for multiple excitations qx and detectors qm.

We now illustrate that fluorescence optical tomography fits perfectly into the abstract setting of Section 2. Let us begin by defining the excitation operator

 U:H1(∂Ω)→H1(Ω),qx↦Uqx:=ux, (27)

which maps a source qx to the corresponding weak solution ux of Eq. 22 and Eq. 23. The interaction with the fluorophore can be described by the multiplication operator

 D:L2(Ω)→L(H1(Ω),H1(Ω)′),D(c)u=cu. (28)

In dimension d≤3, the product cu of two functions c∈L2(Ω) and u∈H1(Ω) lies in L3/2(Ω) and can thus be interpreted as a bounded linear functional on H1(Ω); this shows that D is a bounded linear operator. We further introduce the linear operator

 V:H1(∂Ω)→H1(Ω),qm↦Vqm:=vm, (29)

which maps qm to the weak solution vm of the adjoint emission problem

 −∇⋅(κm∇vm)+μmvm =0, in Ω, (30) κm∂nvm+ρmvm =qm, on ∂Ω. (31)

One can verify that V′ is the dual of the solution operator of the system Eq. 24 and Eq. 25; see [8] for details. Hence we may express the forward operator as

 Tc=V′D(c)U. (32)

As function spaces we choose X=L2(Ω), Y=Z=H1(∂Ω), and U=V=H1(Ω).
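To make the abstract factorization Eq. 32 tangible, here is a deliberately crude one-dimensional finite-difference analogue of Eq. 22 to Eq. 26 (all coefficients, grid sizes, and names are illustrative; in 1D the boundary consists of two points, so there are only two sources and two detectors):

```python
import numpy as np

n = 200                      # interior grid points, illustrative 1D analogue
h = 1.0 / (n + 1)
kappa, mu, rho = 1.0, 1.0, 0.5

# finite-difference matrix for -kappa u'' + mu u with Robin ends
A = np.zeros((n + 2, n + 2))
for i in range(1, n + 1):
    A[i, i - 1:i + 2] += kappa / h**2 * np.array([-1.0, 2.0, -1.0])
    A[i, i] += mu
# Robin boundary rows: kappa (outward derivative) + rho u = q
A[0, 0], A[0, 1] = rho + kappa / h, -kappa / h
A[-1, -1], A[-1, -2] = rho + kappa / h, -kappa / h

# excitation fields for the two boundary sources (U); the emission adjoints
# (V) use the same operator here, i.e., identical optical parameters
Q = np.eye(n + 2)[:, [0, -1]]          # unit sources at the two endpoints
Ux = np.linalg.solve(A, Q)             # columns: u_x for each source
Vm = np.linalg.solve(A, Q)

# forward map Eq. 26: <(Tc) q_x, q_m> = integral of c * u_x * v_m
x = np.linspace(0, 1, n + 2)
c = np.exp(-((x - 0.5) ** 2) / 0.01)   # fluorophore bump in the middle
Tc = Vm.T @ (h * c[:, None] * Ux)      # 2 x 2 data matrix V' D(c) U
```

The resulting 2×2 data matrix is symmetric here because identical parameters are used at both wavelengths; in the application, the excitation and emission operators differ.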

In order to apply the results of Section 2, it remains to verify the conditions of Section 2.2. We already showed that D is a bounded linear operator. The following assertion states that the remaining conditions on U and V hold true as well.

###### Lemma 6

The operators U and V defined in Eq. 27 and Eq. 29 are Hilbert-Schmidt and their singular values decay like σk,U≲k−3/(2d−2) and σk,V≲k−3/(2d−2).

###### Proof

The Hilbert-Schmidt property follows immediately from the decay behavior of the singular values. To show the latter, let Sh be the space of piecewise linear finite elements on a quasi-uniform triangulation of ∂Ω with mesh size h. Let Qh be the L2-orthogonal projection onto Sh and let q∈H1(∂Ω) be arbitrary. Then standard approximation error estimates, see e.g. [4], yield

 ∥q−Qhq∥H−1/2(∂Ω)≲h3/2∥q∥H1(∂Ω).

A-priori estimates for elliptic PDEs yield ∥Uq∥H1(Ω)≲∥q∥H−1/2(∂Ω), and hence U can be continuously extended to an operator on H−1/2(∂Ω); see e.g. [10]. This yields

 ∥U−UQh∥L(H1(∂Ω),H1(Ω))≲h3/2≲k−3/(2d−2),

where k≃h−(d−1) is the dimension of the space Sh. From the min-max characterization of the singular values Eq. 14, we may therefore conclude that σk+1,U≲k−3/(2d−2) as required. The result for V follows in the same way.

###### Remark 4

If prior knowledge on the support of the fluorophore concentration is available, which is frequently encountered in practice, elliptic regularity [10] implies exponential decay of the singular values σk,U and σk,V. In such a situation, the ranks K and N in Lemmas 3 and 5 will depend only logarithmically on the noise level δ, and an accurate approximation of very low rank can be found.

## 4 Numerical illustration

We will now discuss in detail the implementation of the model reduction approach presented in Section 2 for the fluorescence optical tomography problem and demonstrate its viability by some numerical tests.

### 4.1 Truth approximation

Let Th denote a quasi-uniform conforming triangulation of the domain Ω with mesh size h. For the discretization of Eq. 22, Eq. 23 and Eq. 30, Eq. 31, we use a standard finite element method with continuous piecewise linear polynomials; the corresponding spaces then have dimension n each. We choose the same finite element space also for the approximation of the concentration c. The sources for the forward and the adjoint problem are approximated by piecewise linear functions on the boundary of the same mesh Th; hence the corresponding source spaces have dimension m each. All approximation spaces are equipped with the topologies induced by their infinite dimensional counterparts. Standard error estimates allow us to quantify the discretization errors in the resulting truth approximation of the forward operator and to establish the δ-approximation property for h small enough. The error introduced by the discretization can therefore be assumed to be negligible.

Let us briefly discuss in a bit more detail the algebraic structure of the problems arising in the truth approximation. Choosing standard nodal bases, the finite element approximation of problem Eq. 22 and Eq. 23 leads to the linear system

 (Kx+Mx+Rx)U =ExQx. (33)

Here Kx and Mx are the stiffness and mass matrices with coefficients κx and μx, respectively, and the matrices Rx, Ex stem from the discretization of the boundary conditions. The columns of the regular matrix Qx represent the individual independent sources in the basis of the boundary element space. Any excitation generated by a source in this space can thus be expressed as a linear combination of the columns of the excitation matrix U, which serves as a discrete counterpart of the operator U. In a similar manner, the discretization of the adjoint problem Eq. 30 and Eq. 31 leads to

 (Km+Mm+Rm)V =EmQm. (34)

whose solution matrix V can be interpreted as the discrete counterpart of the operator V. The system matrices Km, Mm, Rm, and Em have a similar meaning as above, and the columns of Qm represent the individual detector characteristics. The algebraic form of the truth approximation finally reads

 T(c)=V⊤D(c)U, (35)

where