# Adaptive cross approximation for Tikhonov regularization in general form

Many problems in Science and Engineering give rise to linear integral equations of the first kind with a smooth kernel. Discretization of the integral operator yields a matrix, whose singular values cluster at the origin. We describe the approximation of such matrices by adaptive cross approximation, which avoids forming the entire matrix. The choice of the number of steps of adaptive cross approximation is discussed. The discretized right-hand side represents data that commonly are contaminated by measurement error. Solution of the linear system of equations so obtained is not meaningful because the matrix determined by adaptive cross approximation is rank-deficient. We remedy this difficulty by using Tikhonov regularization and discuss how a fairly general regularization matrix can be used. Computed examples illustrate that the use of a regularization matrix different from the identity can improve the quality of the computed approximate solutions significantly.


## 1 Introduction

Linear integral equations of the first kind,

$$\int_{\Omega_1} \kappa(s,t)\,x(t)\,dt = g(s), \qquad s \in \Omega_2, \tag{1}$$

with a smooth kernel $\kappa$ arise in many applications, including remote sensing, computerized tomography, and image restoration. Here $\Omega_1$ and $\Omega_2$ denote subsets of $\mathbb{R}^d$ for some positive integer $d$. The solution of (1) is an ill-posed problem, because the singular values of the integral operator cluster at the origin; see, e.g., [13, 22] for introductions to ill-posed problems.

Discretization of (1) yields a linear system of equations

$$A\boldsymbol{x} = \boldsymbol{g}, \qquad A \in \mathbb{R}^{n\times n}, \quad \boldsymbol{g} \in \mathbb{R}^{n}, \tag{2}$$

with a matrix $A$ whose singular values cluster at the origin. This makes the matrix severely ill-conditioned and possibly rank-deficient; we measure the conditioning of a matrix by its condition number, which is the ratio of its largest and smallest singular values. Linear systems of equations with a matrix of this kind are often referred to as discrete ill-posed problems; see, e.g., [23]. For notational simplicity, we assume the matrix $A$ to be square; however, the method described also can be applied, after minor modifications, when $A$ is rectangular, in which case the linear system of equations (2) is replaced by a least-squares problem.
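To make the clustering of singular values concrete, the following small sketch (our illustration, not taken from the paper; the Gaussian kernel and midpoint-rule discretization are assumptions) discretizes a smooth first-kind kernel and inspects the singular value decay:

```python
import numpy as np

# Discretize a first-kind integral operator with the smooth kernel
# kappa(s, t) = exp(-(s - t)^2) on [0, 1] x [0, 1] by the midpoint rule.
# The singular values of the resulting matrix cluster at the origin.
n = 100
t = (np.arange(n) + 0.5) / n                      # midpoint nodes
A = np.exp(-(t[:, None] - t[None, :]) ** 2) / n   # quadrature weight 1/n

s = np.linalg.svd(A, compute_uv=False)
print("condition number:", s[0] / s[-1])   # astronomically large
print("s_20 / s_0:", s[20] / s[0])         # already decayed sharply
```

The rapid decay means that a low-rank approximation of such a matrix can be very accurate, which is what cross approximation exploits.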

In many applications, the right-hand side vector $\boldsymbol{g}$ in (2) represents measured data and is contaminated by a measurement error $\boldsymbol{e}$. Due to the severe ill-conditioning of $A$, straightforward solution of (2) typically yields a computed solution that is severely contaminated by propagated error and therefore is not useful. To circumvent this difficulty, the linear system of equations (2) commonly is replaced by a nearby problem whose solution is less sensitive to the error in $\boldsymbol{g}$. This replacement is referred to as regularization. Tikhonov regularization is possibly the most popular and best understood regularization method. It replaces the linear system of equations (2) by a penalized least-squares problem of the form

$$\min_{\boldsymbol{x} \in \mathbb{R}^n} \left\{ \|A\boldsymbol{x} - \boldsymbol{g}\|^2 + \mu \|L\boldsymbol{x}\|^2 \right\}, \tag{3}$$

where $L$ is referred to as the regularization matrix and $\mu > 0$ as the regularization parameter. The problem (3) is said to be in standard form when $L$ is the identity; otherwise (3) is in general form. Throughout this paper, $\|\cdot\|$ denotes the Euclidean vector norm or the spectral matrix norm.

The choice of regularization parameter is important for the quality of the computed solution: a too small value of $\mu$ results in a computed solution that is contaminated by needlessly much propagated error, while a too large value yields an unnecessarily smooth solution that may lack details of interest. Generally, a suitable value of $\mu$ is not known a priori but has to be computed during the solution process. This typically requires that (3) be solved for several $\mu$-values. Methods for determining a suitable value of $\mu$ include the L-curve criterion, generalized cross validation, and the discrepancy principle; see, e.g., [6, 7, 15, 28, 32, 33] for discussions of properties of these and other methods.

The matrix $L$ is assumed to be chosen so that

$$\mathcal{N}(A) \cap \mathcal{N}(L) = \{\boldsymbol{0}\}, \tag{4}$$

where $\mathcal{N}(M)$ denotes the null space of the matrix $M$. Then the Tikhonov minimization problem (3) has the unique solution

$$\boldsymbol{x}_\mu := \big(A^T A + \mu L^T L\big)^{-1} A^T \boldsymbol{g} \tag{5}$$

for any $\mu > 0$; see, e.g., [23] for details. Here and below, the superscript $T$ denotes transposition. We are interested in the situation when the matrices $A$ and $L$ are so large that it is impossible or undesirable to compute the solution (5) by Cholesky factorization of the matrix $A^T A + \mu L^T L$. In fact, we would like to avoid evaluating all the entries of $A$. We describe how $A$ can be approximated by a matrix of low rank, without evaluating all matrix entries, by applying adaptive cross approximation.
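For small problems, the solution (5) can of course be evaluated directly from the normal equations; the following minimal sketch (the function name is ours) shows this baseline, which is exactly what becomes infeasible for large $A$:

```python
import numpy as np

def tikhonov_general(A, L, g, mu):
    """Solve the general-form Tikhonov problem (3) via its normal
    equations (5): x_mu = (A^T A + mu L^T L)^{-1} A^T g.
    Only viable when A and L are small enough to form explicitly."""
    return np.linalg.solve(A.T @ A + mu * (L.T @ L), A.T @ g)

# With mu -> 0 and A nonsingular, x_mu approaches the naive solution A^{-1} g;
# with large mu (and L = I), the solution is damped toward zero.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
g = np.array([2.0, 1.0])
x = tikhonov_general(A, np.eye(2), g, 1e-12)
```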

The application of cross approximation to matrices that stem from the discretization of Fredholm integral equations of the second kind has received considerable attention in the literature; see, e.g., [3, 4, 17, 20, 37]. However, the use of cross approximation in the context of solving linear discrete ill-posed problems has not been thoroughly studied.

The use of adaptive cross approximation for the approximate solution of (3) when $L$ is the identity is discussed in [29]. This paper extends that discussion to general regularization matrices $L$. Our interest in this extension of the method in [29] stems from the fact that a suitably chosen regularization matrix $L \neq I$ can deliver solutions of higher quality than $L = I$; see, e.g., [25, 31, 35] for illustrations and discussions of various ways of constructing regularization matrices. Roughly, $L$ should be chosen so as not to damp known important features of the desired solution, while damping the propagated error stemming from the error in $\boldsymbol{g}$.

In the computed examples of Section 4, we use the discrepancy principle to determine $\mu$. Let $\hat{\boldsymbol{g}}$ denote the unknown error-free vector associated with the right-hand side $\boldsymbol{g}$ in (2), i.e., $\boldsymbol{g} = \hat{\boldsymbol{g}} + \boldsymbol{e}$. Assume that the linear system of equations with the error-free right-hand side,

$$A\boldsymbol{x} = \hat{\boldsymbol{g}}, \tag{6}$$

is consistent and that a fairly accurate bound $\|\boldsymbol{e}\| \le \delta$ is known. The discrepancy principle prescribes that the regularization parameter $\mu$ be determined so that the Tikhonov solution (5) satisfies

$$\|A\boldsymbol{x}_\mu - \boldsymbol{g}\| = \eta\delta, \tag{7}$$

where $\eta \ge 1$ is a user-specified parameter that is independent of $\delta$. It can be shown that as $\delta$ tends to zero, $\boldsymbol{x}_\mu$ converges to the minimal-norm solution of (6); see, e.g., [13] for a proof in a Hilbert space setting. We remark that the determination of $\mu$ such that $\boldsymbol{x}_\mu$ satisfies (7) typically requires the solution of (3) for several $\mu$-values.
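Because the discrepancy $\|A\boldsymbol{x}_\mu - \boldsymbol{g}\|$ increases monotonically with $\mu$, a simple bracketing zero-finder already suffices to satisfy (7); the following sketch (our illustration; bisection on $\log\mu$ instead of the faster Newton-type zero-finders cited above, and the bracket endpoints are assumptions) shows the idea for a small dense problem:

```python
import numpy as np

def discrepancy_mu(A, L, g, delta, eta=1.1, lo=1e-12, hi=1e2, iters=60):
    """Bisection on log(mu): ||A x_mu - g|| grows monotonically with mu,
    so we can bracket the value satisfying (7), ||A x_mu - g|| = eta*delta."""
    def residual(mu):
        x = np.linalg.solve(A.T @ A + mu * (L.T @ L), A.T @ g)
        return np.linalg.norm(A @ x - g)
    for _ in range(iters):
        mid = np.sqrt(lo * hi)            # geometric midpoint
        if residual(mid) < eta * delta:
            lo = mid                      # residual too small: increase mu
        else:
            hi = mid
    return np.sqrt(lo * hi)
```

In practice one would solve the projected problem of Section 3 at each trial $\mu$ rather than the full system, but the monotonicity argument is the same.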

The present paper is concerned with the situation when the matrix $A$ in (2) is large. Then the evaluation of all entries of $A$ can be quite time-consuming. Cross approximation, also known as skeleton approximation, reduces this time by approximating $A$ by a matrix that is determined by only a few rows and columns of $A$. We would like to choose these rows and columns so that the resulting matrix approximates $A$ well and is easy to compute with.

This paper is organized as follows. Section 2 reviews the application of adaptive cross approximation to the approximation of $A$ by a matrix of low rank. In Section 3, we describe the application of adaptive cross approximation to the approximate solution of the Tikhonov minimization problem (3). Section 4 reports a few computed examples, and concluding remarks can be found in Section 5.

We conclude this section with an example that leads to a large linear discrete ill-posed problem, whose solution is difficult to compute using straightforward discretization.

Example 1.1. Consider the solution of the Fredholm integral equation of the first kind

$$\int_S \frac{\sigma(\boldsymbol{y})}{4\pi\epsilon_0 \|\boldsymbol{x} - \boldsymbol{y}\|}\, d\boldsymbol{y} = \phi(\boldsymbol{x}), \qquad \boldsymbol{x} \in S, \tag{8}$$

where $\phi$ is a given electric potential, $S$ is a surface with electrodes, $\sigma$ denotes the density of the charge on $S$, and $\epsilon_0$ stands for the electric permittivity in vacuum. We would like to determine $\sigma$ from $\phi$, and assume that $\phi$ is chosen so that (8) has a solution. The computation of a solution of (8) is an ill-posed problem. Using a weak formulation and discretization leads to a dense symmetric matrix with entries

$$k_{ij} = \int_S \int_S \frac{v_j(\boldsymbol{x})\, v_i(\boldsymbol{y})}{4\pi\epsilon_0 \|\boldsymbol{x} - \boldsymbol{y}\|}\, d\boldsymbol{y}\, d\boldsymbol{x}, \qquad i,j = 1,2,\ldots,n.$$

This matrix can be expensive to store and handle when the discretization is fine. Employing hierarchical compression with $\mathcal{H}^2$-matrices reduces the required storage to $O(n)$, with a large constant hidden in the $O(n)$, and allows matrix-vector product evaluations in $O(n)$ arithmetic floating point operations (flops); see [5].

A fine discretization results in a large dense matrix. We used the H2Lib library [27] for the computations and base this example on one of the standard examples provided in this library. Without compression, 512 GB of memory are needed to store the matrix. On a laptop computer with an Intel Core i7-10710U CPU and 16 GB of RAM, it took 1103 s to assemble the matrix in the compressed $\mathcal{H}^2$-matrix format. The matrix required 15.45 GB of storage, thus almost all the available RAM. Carrying out one matrix-vector product evaluation required 1596 s, that is, 44% more time than assembling the matrix. The reason for this is that the flops require a significant amount of communication between faster and slower storage. This example illustrates that there are linear discrete ill-posed problems of interest that are difficult to solve on a laptop computer, even if a significant amount of memory is available. It therefore is important to develop methods that are able to determine approximations of dense matrices that require less computer storage and less CPU time for the evaluation of matrix-vector products.

## 2 Adaptive cross approximation

Cross approximation of a large matrix $A \in \mathbb{R}^{n\times n}$ determines an approximation $M_k$ of rank at most $k$. All entries of $M_k$ can be evaluated much faster than all entries of $A$, because $M_k$ is constructed from only $k$ rows and columns of $A$. We would like $M_k$ to be an accurate approximation of $A$. This is achieved by a careful choice of the rows and columns of $A$ that define the matrix $M_k$. A cross approximation method is said to be adaptive when the rows and columns of $A$ that determine $M_k$ are chosen depending on properties of $A$ revealed during the computations; see [2, 4].

We outline the adaptive cross approximation method for a general square nonsymmetric matrix $A$ described in [29]. This method is an adaptation of the scheme in [17] to the approximation of the matrix of linear discrete ill-posed problems. When $A$ is symmetric, the matrix $M_k$ can be chosen to be symmetric. This roughly halves the storage requirement for $M_k$. Both the situations when $A$ is symmetric positive definite and symmetric indefinite are discussed in [29]. We therefore will not dwell on these special cases in the present paper.

Let the matrix $A$ be nonsymmetric, and let $\boldsymbol{i}$ and $\boldsymbol{j}$ denote index vectors with $k$ entries in $\{1,2,\ldots,n\}$. The submatrices $A(\boldsymbol{i},:)$ and $A(:,\boldsymbol{j})$ of $A$ are made up of the rows with indices $\boldsymbol{i}$ and the columns with indices $\boldsymbol{j}$, respectively. Moreover, the core matrix $A(\boldsymbol{i},\boldsymbol{j})$ is made up of $k$ rows and columns of $A$. Assume that this matrix is nonsingular. Then the rows and columns of the matrix

$$M_k = A(:,\boldsymbol{j})\, A(\boldsymbol{i},\boldsymbol{j})^{-1}\, A(\boldsymbol{i},:)$$

with indices $\boldsymbol{i}$ and $\boldsymbol{j}$ are equal to the corresponding rows and columns of $A$; when $A$ is of rank $k$, we have $M_k = A$.

Goreinov et al. [21] show that it is beneficial to choose the index vectors $\boldsymbol{i}$ and $\boldsymbol{j}$ so that $A(\boldsymbol{i},\boldsymbol{j})$ is a submatrix of $A$ of maximal volume, i.e., so that the modulus of the determinant of $A(\boldsymbol{i},\boldsymbol{j})$ is maximal. However, it is difficult to determine such index vectors. We therefore seek to determine a low-rank matrix $M_k$ that is a sufficiently accurate approximation of $A$ by a greedy algorithm. Suppose that we already have computed an approximation

$$M_{k-1} = \sum_{\ell=1}^{k-1} \boldsymbol{w}^{(c)}_\ell \big(\boldsymbol{w}^{(r)}_\ell\big)^T, \qquad \boldsymbol{w}^{(c)}_\ell, \boldsymbol{w}^{(r)}_\ell \in \mathbb{R}^n,$$

of rank at most $k-1$ of $A$. To compute the next approximation, $M_k$, of $A$ of rank at most $k$, we determine a row index $i^*_k$ and a column index $j^*_k$ by looking for the index of the maximum element in magnitude in the previously computed vectors $\boldsymbol{w}^{(c)}_{k-1}$ (for the index $i^*_k$) and $\boldsymbol{w}^{(r)}_k$ (for the index $j^*_k$). The vector $\boldsymbol{w}^{(r)}_1$ can be chosen as an arbitrary row of $A$. We will let it be the first row of $A$ in the computed examples of Section 4. The vector $\boldsymbol{w}^{(c)}_1$ can be chosen in a similar way.

In the simplest form of cross approximation, the determination of the vectors $\boldsymbol{w}^{(r)}_k$ and $\boldsymbol{w}^{(c)}_k$ only requires the entries in row $i^*_k$ and column $j^*_k$ of $A$ and the elements of the already computed vectors $\boldsymbol{w}^{(c)}_\ell$ and $\boldsymbol{w}^{(r)}_\ell$, $\ell = 1,2,\ldots,k-1$:

$$\begin{aligned} \big(\boldsymbol{w}^{(r)}_k\big)_j &= A_{i^*_k,\, j} - \sum_{\ell=1}^{k-1} \big(\boldsymbol{w}^{(c)}_\ell\big)_{i^*_k} \big(\boldsymbol{w}^{(r)}_\ell\big)_j, \qquad \delta = \big(\boldsymbol{w}^{(r)}_k\big)_{j^*_k}, \\ \big(\boldsymbol{w}^{(c)}_k\big)_i &= \frac{1}{\delta}\left(A_{i,\, j^*_k} - \sum_{\ell=1}^{k-1} \big(\boldsymbol{w}^{(c)}_\ell\big)_i \big(\boldsymbol{w}^{(r)}_\ell\big)_{j^*_k}\right). \end{aligned}$$

A new skeleton is obtained from the remainder,

$$R_k = A - \sum_{\ell=1}^{k} \boldsymbol{w}^{(c)}_\ell \big(\boldsymbol{w}^{(r)}_\ell\big)^T,$$

without explicitly computing all entries of the matrix $R_k$.
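The greedy scheme can be sketched as follows. This is a minimal illustration under our own assumptions (the function name `aca`, the `entry` callback, and the simple pivot bookkeeping are ours), not the algorithm of [29]:

```python
import numpy as np

def aca(entry, n, k_max, tol=1e-10):
    """Greedy cross approximation touching only a few rows/columns of A.
    `entry(i, j)` returns A_ij.  Returns W_c, W_r with A ~= W_c @ W_r.T."""
    Wc, Wr = [], []
    used_rows = set()
    i_star = 0                              # start from the first row
    for _ in range(k_max):
        used_rows.add(i_star)
        # residual row (R_{k-1})_{i*,:} without forming R_{k-1}
        row = np.array([entry(i_star, j) for j in range(n)], dtype=float)
        for wc, wr in zip(Wc, Wr):
            row -= wc[i_star] * wr
        j_star = int(np.argmax(np.abs(row)))
        delta = row[j_star]
        if abs(delta) < tol:                # pivot negligible: stop
            break
        col = np.array([entry(i, j_star) for i in range(n)], dtype=float)
        for wc, wr in zip(Wc, Wr):
            col -= wr[j_star] * wc
        Wc.append(col / delta)              # scaled residual column
        Wr.append(row)                      # unscaled residual row
        # next pivot row: largest entry of the new column among unused rows
        cand = np.abs(Wc[-1]).copy()
        cand[list(used_rows)] = -1.0
        i_star = int(np.argmax(cand))
    return np.array(Wc).T, np.array(Wr).T
```

Note that only the visited rows and columns of $A$ are ever evaluated; for a rank-$r$ matrix the loop terminates after $r$ steps because the residual vanishes.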

The required number of rank-one matrices, $k$, that make up $M_k$ is generally not known a priori. We would like the difference $A - M_k$ to be of small norm. However, we cannot evaluate this difference, because most entries of $A$ are not known. Following [17], we include $t$ randomly chosen matrix entries $A_{i_\ell,j_\ell}$, for $\ell = 1,2,\ldots,t$, with $t$ much smaller than the total number of entries of $A$. Define for future reference the set

$$\Pi = \{(i_\ell, j_\ell) : \ell = 1,2,\ldots,t\}. \tag{9}$$

When a new skeleton is determined, the values of these entries are updated by subtraction of the new skeleton,

$$(R_k)_{i_\ell,j_\ell} = (R_{k-1})_{i_\ell,j_\ell} - \big(\boldsymbol{w}^{(c)}_k\big)_{i_\ell} \big(\boldsymbol{w}^{(r)}_k\big)_{j_\ell}, \tag{10}$$

with $(R_0)_{i_\ell,j_\ell} = A_{i_\ell,j_\ell}$. The values $(R_k)_{i_\ell,j_\ell}$ are used in Subsection 3.3 as part of the stopping criterion to determine the final value of $k$. The value of $t$ is a percentage of the total number of entries. The choice of $t$ should depend on properties of the matrix $A$; see [17, 29] for further details. An algorithm is presented in [29].

## 3 Tikhonov regularization in general form

This section discusses how to combine adaptive cross approximation with Tikhonov regularization in general form.

### 3.1 Using adaptive cross approximation

The matrix $M_k$, whose computation was outlined in the previous section, is of the form

$$M_k = W^{(c)}_k \big(W^{(r)}_k\big)^T, \tag{11}$$

where

$$W^{(c)}_k = \big[\boldsymbol{w}^{(c)}_1, \boldsymbol{w}^{(c)}_2, \ldots, \boldsymbol{w}^{(c)}_k\big] \in \mathbb{R}^{n\times k}, \qquad W^{(r)}_k = \big[\boldsymbol{w}^{(r)}_1, \boldsymbol{w}^{(r)}_2, \ldots, \boldsymbol{w}^{(r)}_k\big] \in \mathbb{R}^{n\times k}.$$

Compute the skinny QR factorizations

$$W^{(c)}_k = Q^{(c)}_k R^{(c)}_k, \qquad W^{(r)}_k = Q^{(r)}_k R^{(r)}_k, \tag{12}$$

where the matrices $Q^{(c)}_k, Q^{(r)}_k \in \mathbb{R}^{n\times k}$ have orthonormal columns and the matrices $R^{(c)}_k, R^{(r)}_k \in \mathbb{R}^{k\times k}$ are upper triangular. The factorizations (12) can be computed by the Householder-QR method or by factorization methods that are designed to perform efficiently on modern computers, such as the methods described in [9, 14, 38].

Combining (11) and (12) yields

$$M_k = Q^{(c)}_k R^{(c)}_k \big(R^{(r)}_k\big)^T \big(Q^{(r)}_k\big)^T. \tag{13}$$

Replacing $A$ by $M_k$ in (3) gives the minimization problem

$$\min_{\boldsymbol{x} \in \mathbb{R}^n} \left\{ \|M_k \boldsymbol{x} - \boldsymbol{g}\|^2 + \mu \|L\boldsymbol{x}\|^2 \right\}, \tag{14}$$

which can be solved in several ways. If the matrix $L$ has a special structure, such as being banded with small bandwidth, then it may be attractive to transform (14) to standard form by a technique described by Eldén [12]. Regularization matrices with a small bandwidth arise, e.g., when $L$ represents a finite difference approximation of a differential operator in one space-dimension. It also is easy to transform (14) to standard form when $L$ is an orthogonal projector; see [30].

In the remainder of this section, we discuss the situation when $L$ is such that transformation of (14) to standard form as described in [12] is too expensive to be attractive. This is the case, for instance, when $L$ represents a finite difference approximation of a differential operator in two or more space-dimensions. Matrices of this kind will be used in the computed examples of Section 4.

We describe approaches to compute a solution of (14) and start with the simplest one. The matrix $M_k$ has a null space of large dimension (at least $n-k$). Therefore, the Tikhonov minimization problem (14) is not guaranteed to have a unique solution. To remedy this difficulty, we require the solution of (14) to live in a subspace of fairly low dimension. A simple solution method is obtained when using the solution subspace $\mathcal{R}(Q^{(r)}_k)$, i.e., setting $\boldsymbol{x} = Q^{(r)}_k \boldsymbol{y}$. Then we obtain the minimization problem

$$\min_{\boldsymbol{y} \in \mathbb{R}^k} \left\{ \big\|M_k Q^{(r)}_k \boldsymbol{y} - \boldsymbol{g}\big\|^2 + \mu \big\|L Q^{(r)}_k \boldsymbol{y}\big\|^2 \right\}, \tag{15}$$

which has a unique solution if and only if

$$\mathcal{N}\big(M_k Q^{(r)}_k\big) \cap \mathcal{N}\big(L Q^{(r)}_k\big) = \{\boldsymbol{0}\}. \tag{16}$$

This holds, in particular, when the triangular matrices $R^{(c)}_k$ and $R^{(r)}_k$ in (13) are nonsingular. We found (16) to hold in all computed examples that we solved.

Introduce the QR factorization

$$L Q^{(r)}_k = Q^{(L)}_k R^{(L)}_k, \tag{17}$$

where the matrix $Q^{(L)}_k$ has orthonormal columns and $R^{(L)}_k \in \mathbb{R}^{k\times k}$ is upper triangular. We note that since the matrix $L$ typically is very sparse and $k$ is not large, the left-hand side of (17) generally can be evaluated quite quickly also when $n$ is large. The minimization problem (15) then yields the small problem

$$\min_{\boldsymbol{y} \in \mathbb{R}^k} \left\{ \big\|R^{(c)}_k \big(R^{(r)}_k\big)^T \boldsymbol{y} - \big(Q^{(c)}_k\big)^T \boldsymbol{g}\big\|^2 + \mu \big\|R^{(L)}_k \boldsymbol{y}\big\|^2 \right\}. \tag{18}$$

This problem can be solved in several ways: we may compute the generalized singular value decomposition (GSVD) of the matrix pair $\{R^{(c)}_k (R^{(r)}_k)^T, R^{(L)}_k\}$ (see, e.g., [10, 23]), or apply a cheaper reduction of the matrix pair that can be used when the generalized singular values are not explicitly required; see [11].

When the matrix $R^{(L)}_k$ is nonsingular and not very ill-conditioned, which is the case in many applications, one may consider transforming the minimization problem (18) to standard form by the substitution $\boldsymbol{z} = R^{(L)}_k \boldsymbol{y}$. This yields the problem

$$\min_{\boldsymbol{z} \in \mathbb{R}^k} \left\{ \big\|R^{(c)}_k \big(R^{(r)}_k\big)^T \big(R^{(L)}_k\big)^{-1} \boldsymbol{z} - \big(Q^{(c)}_k\big)^T \boldsymbol{g}\big\|^2 + \mu \|\boldsymbol{z}\|^2 \right\}, \tag{19}$$

which easily can be solved, e.g., by computing the singular value decomposition (SVD) of the matrix $R^{(c)}_k (R^{(r)}_k)^T (R^{(L)}_k)^{-1}$. The solution of (19) yields the solution of (15), from which we determine the approximate solution of (3). The solution of (19) is cheaper than the solution of (18) with the aid of the GSVD; see [19] for counts of the arithmetic floating point operations necessary to compute the GSVD of a pair of matrices and the SVD of a matrix.
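The standard-form transformation of (18) into (19) can be sketched as follows; the function name and the SVD-based filter formulation are our choices, and a GSVD-based solver would serve equally well when $R^{(L)}_k$ is ill-conditioned:

```python
import numpy as np

def solve_projected(Rc, Rr, RL, c, mu):
    """Solve (18) via the standard-form transform (19): substitute z = RL y,
    then min_z ||B z - c||^2 + mu ||z||^2 with B = Rc Rr^T RL^{-1}
    is solved through the SVD of the small k x k matrix B."""
    B = Rc @ Rr.T @ np.linalg.inv(RL)
    U, s, Vt = np.linalg.svd(B)
    # Tikhonov filter factors s_i / (s_i^2 + mu) applied in the SVD basis
    z = Vt.T @ ((s / (s ** 2 + mu)) * (U.T @ c))
    return np.linalg.solve(RL, z)          # back-substitute y = RL^{-1} z
```

Since all factors are $k \times k$, the cost per $\mu$-value is $O(k^3)$ at most, and the SVD can be reused for every trial value of $\mu$ in the zero-finder.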

### 3.2 The discrepancy principle

We turn to the computation of the regularization parameter $\mu$ by the discrepancy principle. Assume for the moment that the matrix $A$ is available. Then we can solve equation (7) for $\mu$ by using a zero-finder such as Newton's method or one of the zero-finders described in [8, 34]. The theoretical justification of the discrepancy principle requires that the unavailable error-free vector $\hat{\boldsymbol{g}}$ associated with the available error-contaminated vector $\boldsymbol{g}$ satisfies $\|\hat{\boldsymbol{g}} - \boldsymbol{g}\| \le \delta$.

Now consider the application of the discrepancy principle to the determination of the regularization parameter $\mu$ in (14). Generally, $\boldsymbol{g} \notin \mathcal{R}(M_k)$ and, therefore, the discrepancy principle cannot be applied when solving (14) without modification. In the computed examples, we determine $\mu$ so that the computed solution $\boldsymbol{x}_\mu$ of (14) satisfies

$$\big\|M_k \boldsymbol{x}_\mu - Q^{(c)}_k \big(Q^{(c)}_k\big)^T \boldsymbol{g}\big\| = \eta\delta; \tag{20}$$

cf. (7). If the matrices $R^{(c)}_k$ and $R^{(r)}_k$ in (13) are nonsingular, which generally is the case, then $Q^{(c)}_k (Q^{(c)}_k)^T$ is an orthogonal projector onto $\mathcal{R}(M_k)$, and $M_k \boldsymbol{x}_\mu$ lives in $\mathcal{R}(Q^{(c)}_k)$. Equation (20) can be solved for $\mu$ by using a zero-finder.

### 3.3 Stopping criterion for the adaptive cross approximation algorithm

In view of (7), we would like to determine a value of the regularization parameter $\mu$ such that

$$\|A\boldsymbol{x}_\mu(M_k) - \boldsymbol{g}\| = \eta\delta, \tag{21}$$

where $\boldsymbol{x}_\mu(M_k)$ denotes the computed solution of (14).

Even though the matrix $A$ is not available, we can determine an approximate upper bound for the left-hand side of (21) as follows:

$$\begin{aligned} \|A\boldsymbol{x}_\mu(M_k) - \boldsymbol{g}\| &= \|(A - M_k)\boldsymbol{x}_\mu(M_k) + M_k\boldsymbol{x}_\mu(M_k) - \boldsymbol{g}\| \\ &\le \|A - M_k\|\,\|\boldsymbol{x}_\mu(M_k)\| + \|M_k\boldsymbol{x}_\mu(M_k) - \boldsymbol{g}\| \\ &\lessapprox S_k\|\boldsymbol{x}_\mu(M_k)\| + \|M_k\boldsymbol{x}_\mu(M_k) - \boldsymbol{g}\|, \end{aligned} \tag{22}$$

where $S_k$ is an approximation of $\|A - M_k\|$. Based on the values of $(R_k)_{i_\ell,j_\ell}$ from (10), such an approximation can be computed as

$$\|A - M_k\| \le \|A - M_k\|_F \approx \left(\sum_{(i_\ell,j_\ell)\in\Pi} \big|(A - M_k)_{i_\ell,j_\ell}\big|^2\, \frac{mn}{|\Pi|}\right)^{1/2} = \left(\sum_{(i_\ell,j_\ell)\in\Pi} \big|(R_k)_{i_\ell,j_\ell}\big|^2\, \frac{mn}{|\Pi|}\right)^{1/2} =: S_k, \tag{23}$$

where $\|\cdot\|_F$ denotes the Frobenius matrix norm, the matrix $A$ has $mn$ entries (here $m = n$), and the set $\Pi$ is defined by (9) with $t$ elements.
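The estimate (23) amounts to scaling the mean square of the tracked residual entries up to the full number of matrix entries; a minimal sketch (the function name is ours):

```python
import numpy as np

def frobenius_estimate(samples, m, n):
    """Estimate ||A - M_k||_F as in (23): scale the sum of squares of the
    t = |Pi| tracked residual entries up to all m*n entries of A."""
    samples = np.asarray(samples, dtype=float)
    return np.sqrt(np.sum(samples ** 2) * (m * n) / samples.size)
```

When the sample set happens to contain every entry of the residual matrix, the estimate coincides with the exact Frobenius norm, which provides a simple sanity check.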

The number of steps $k$ of the adaptive cross approximation algorithm is chosen to be as small as possible such that there exists a $\mu$ with

$$S_k \|\boldsymbol{x}_\mu(M_k)\| = \eta_1\delta \qquad \text{and} \qquad \|M_k \boldsymbol{x}_\mu(M_k) - \boldsymbol{g}\| \le \eta_2\delta.$$

In this case we have, based on (22),

$$\|A\boldsymbol{x}_\mu(M_k) - \boldsymbol{g}\| \lessapprox S_k \|\boldsymbol{x}_\mu(M_k)\| + \|M_k \boldsymbol{x}_\mu(M_k) - \boldsymbol{g}\| \lessapprox (\eta_1 + \eta_2)\delta.$$

## 4 Numerical experiments

This section describes a few computed examples with the adaptive cross approximation method. For problems in one space-dimension, we will use the regularization matrices $L = I$, where $I$ denotes the identity matrix, and

$$L_1 = \frac{1}{2} \begin{bmatrix} 1 & -1 & & & \\ & 1 & -1 & & \\ & & \ddots & \ddots & \\ & & & 1 & -1 \end{bmatrix} \in \mathbb{R}^{(n-1)\times n} \tag{24}$$

or

$$L_2 = \frac{1}{4} \begin{bmatrix} -1 & 2 & -1 & & & \\ & -1 & 2 & -1 & & \\ & & \ddots & \ddots & \ddots & \\ & & & -1 & 2 & -1 \end{bmatrix} \in \mathbb{R}^{(n-2)\times n}, \tag{25}$$

which approximate a multiple of the first and second order derivative operators, respectively, assuming that $\boldsymbol{x}$ is the discretization of a function at equidistant points on a bounded interval. For problems in two space-dimensions, we use the regularization matrices

$$L_{1,\otimes} = \begin{bmatrix} I \otimes L_1 \\ L_1 \otimes I \end{bmatrix} \tag{26}$$

or

$$L_{2,\otimes} = \begin{bmatrix} I \otimes L_2 \\ L_2 \otimes I \end{bmatrix}, \tag{27}$$

where $\otimes$ stands for the Kronecker product. These choices of regularization matrices are fairly common; see, e.g., [23, 26, 32] for illustrations.
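The matrices (24)–(27) are straightforward to assemble; the following sketch (dense for clarity; in practice one would use sparse storage, and the function names are ours) constructs them:

```python
import numpy as np

def first_diff(n):
    """L_1 in (24): scaled forward differences, (n-1) x n."""
    L = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    L[idx, idx], L[idx, idx + 1] = 1.0, -1.0
    return L / 2.0

def second_diff(n):
    """L_2 in (25): scaled second differences, (n-2) x n."""
    L = np.zeros((n - 2, n))
    idx = np.arange(n - 2)
    L[idx, idx], L[idx, idx + 1], L[idx, idx + 2] = -1.0, 2.0, -1.0
    return L / 4.0

def kron_reg(L, n):
    """Stacked Kronecker form (26)/(27) for problems in two space-dimensions."""
    I = np.eye(n)
    return np.vstack([np.kron(I, L), np.kron(L, I)])
```

Note that constant vectors lie in the null space of $L_1$ and linear vectors in that of $L_2$, so these features of the solution are not penalized.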

The examples are taken from the Regularization Tools [24] and from IR Tools [18]. In all examples, the number of elements in the set is . The values of the parameters are chosen as .

Experiment 1: We consider the problem “gravity” from [24]. The aim of this experiment is to illustrate that the quantities $S_k$ defined by (23) provide quite accurate approximations of $\|A - M_k\|$. This is displayed by Figure 1.

Experiment 2: We again consider the example “gravity” from [24]. This example illustrates that for certain problems only a fairly small number of steps of the adaptive cross approximation algorithm suffices to yield a satisfactory result. The example also shows that it may be very beneficial to use a regularization matrix different from the identity matrix. The quality of the computed solution is measured by the relative error with respect to the exact solution. The horizontal axis of Figure 2 shows the number of steps of adaptive cross approximation; the vertical line indicates the step at which, for each one of the choices of $L$, the stopping criterion for the method is satisfied.

Experiment 3: This experiment is similar to Experiment 2, but for problem “baart” from [24]. Results are displayed in Figure 3. Also in this example it is beneficial to use a regularization matrix different from the identity.

Experiment 4: This experiment is similar to Experiment 2, but for the problem “phillips” from [24]. The result is shown in Figure 4. For this example, all three regularization matrices used perform about equally well. The singular values of the matrix decay to zero more slowly for this example than for the previous examples. Therefore, more steps of the adaptive cross approximation algorithm have to be carried out.

Experiment 5: We consider the example EXdiffusion_rrgmres from IR Tools [18]. The other parameters are as in Experiment 2. We compare the relative error (with respect to the exact solution) of the “best” solution determined by the example script in IR Tools with the relative error attained by our algorithm at the final iteration step.

Experiment 6: The same as in Experiment 2, but for a problem in two space-dimensions built from the matrix of the “baart” regularization problem. We use (27), among other choices, as regularization matrix. The relative errors of the computed solutions are displayed in Figure 6.

## 5 Conclusion

This paper discusses the application of adaptive cross approximation to Tikhonov regularization problems in general form. The computed examples illustrate that often only a few cross approximation steps are required to yield useful approximate solutions. Particular attention is given to the stopping criterion for adaptive cross approximation.

## Acknowledgment

This research was partially supported by the Fund for Scientific Research–Flanders (Belgium), Structured Low-Rank Matrix/Tensor Approximation: Numerical Optimization-Based Algorithms and Applications: SeLMA, EOS 30468160, the KU Leuven Research Fund, Numerical Linear Algebra and Polynomial Computations, OT C14/17/073.

## References

• [1] M. L. Baart, The use of auto-correlation for pseudo-rank determination in noisy ill-conditioned least-squares problems, IMA J. Numer. Anal., 2 (1982), pp. 241–247.
• [2] M. Bebendorf, Approximation of boundary element matrices, Numer. Math., 86 (2000), pp. 565–589.
• [3] M. Bebendorf and R. Grzibovski, Accelerating Galerkin BEM for linear elasticity using adaptive cross approximation, Math. Methods Appl. Sci., 29 (2006), pp. 1721–1747.
• [4] M. Bebendorf and S. Rjasanow, Adaptive low-rank approximation of collocation matrices, Computing, 70 (2003), pp. 1–24.
• [5] S. Börm, Efficient Numerical Methods for Non-Local Operators: $\mathcal{H}^2$-Matrix Compression, Algorithms and Analysis, European Math. Society, Zürich, 2010.
• [6] C. Brezinski, G. Rodriguez, and S. Seatzu, Error estimates for linear systems with applications to regularization, Numer. Algorithms, 49 (2008), pp. 85–104.
• [7] C. Brezinski, M. Redivo-Zaglia, G. Rodriguez, and S. Seatzu, Multi-parameter regularization techniques for ill-conditioned linear systems, Numer. Math., 94 (2003), pp. 203–224.
• [8] A. Buccini, M. Pasha, and L. Reichel, Generalized singular value decomposition with iterated Tikhonov regularization, J. Comput. Appl. Math., 373 (2020), Art. 112276.
• [9] D. Calvetti, J. Petersen, and L. Reichel, A parallel implementation of the GMRES algorithm, in Numerical Linear Algebra, eds. L. Reichel, A. Ruttan, and R. S. Varga, de Gruyter, Berlin, 1993, pp. 31–46.
• [10] L. Dykes, S. Noschese, L. Reichel, Rescaling the GSVD with application to ill-posed problems, Numer. Algorithms, 68 (2015), pp. 531–545.
• [11] L. Dykes, L. Reichel, Simplified GSVD computations for the solution of linear discrete ill-posed problems, J. Comput. Appl. Math., 255 (2013), pp. 15–27.
• [12] L. Eldén, A weighted pseudoinverse, generalized singular values, and constrained least squares problems, BIT Numer. Math., 22 (1982), pp. 487–501.
• [13] H. W. Engl, M. Hanke, and A. Neubauer, Regularization of Inverse Problems, Kluwer, Dordrecht, 1996.
• [14] J. Erhel, A parallel GMRES version for general sparse matrices, Electron. Trans. Numer. Anal., 3 (1995), pp. 160–176.
• [15] C. Fenu, L. Reichel, G. Rodriguez, and H. Sadok, GCV for Tikhonov regularization by partial SVD, BIT Numer. Math., 57 (2017), pp. 1019–1039.
• [16] L. Fox and E. T. Goodwin, The numerical solution of non-singular linear integral equations, Philos. Trans. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci., 245:902 (1953), pp. 501–534.
• [17] K. Frederix and M. Van Barel, Solving a large dense linear system by adaptive cross approximation, J. Comput. Appl. Math., 234 (2010), pp. 3181–3195.
• [18] S. Gazzola, P. C. Hansen, and J. G. Nagy, IR Tools: A MATLAB package of iterative regularization methods and large-scale test problems, Numer. Algorithms, 81 (2019), pp. 773–811.
• [19] G. H. Golub and C. F. Van Loan, Matrix Computations, 4th ed., Johns Hopkins University Press, Baltimore, 2013.
• [20] S. A. Goreinov, E. E. Tyrtyshnikov, and N. L. Zamarashkin, A theory of pseudo-skeleton approximation, Linear Algebra Appl., 261 (1997), pp. 1–21.
• [21] S. A. Goreinov, N. L. Zamarashkin, and E. E. Tyrtyshnikov, Pseudo-skeleton approximations by matrices of maximal volume, Math. Notes, 62 (1997), pp. 515–519.
• [22] C. W. Groetsch, The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind, Pitman, Boston, 1984.
• [23] P. C. Hansen, Rank-Deficient and Discrete Ill-Posed Problems, SIAM, Philadelphia, 1998.
• [24] P. C. Hansen, Regularization Tools version 4.0 for Matlab 7.3, Numer. Algorithms, 46 (2007), pp. 189–194. Software is available in Netlib at http://www.netlib.org.
• [25] G. Huang, S. Noschese, and L. Reichel, Regularization matrices determined by matrix nearness problems, Linear Algebra Appl., 502 (2016), pp. 41–57.
• [26] G. Huang, L. Reichel, and F. Yin, On the choice of subspace for large-scale Tikhonov regularization problems in general form, Numer. Algorithms, 81 (2019), pp. 33–55.
• [27] H2Lib, http://www.h2lib.org/, 2015–2020.
• [28] S. Kindermann and K. Raik, A simplified L-curve method as error estimator, Electron. Trans. Numer. Anal., 53 (2020), pp. 217–238.
• [29] T. Mach, L. Reichel, M. Van Barel, and R. Vandebril, Adaptive cross approximation for ill-posed problems, J. Comput. Appl. Math., 303 (2016), pp. 206–217.
• [30] S. Morigi, L. Reichel, and F. Sgallari, Orthogonal projection regularization operators, Numer. Algorithms, 44 (2007), pp. 99–114.
• [31] S. Noschese and L. Reichel, Inverse problems for regularization matrices, Numer. Algorithms, 60 (2012), pp. 531–544.
• [32] Y. Park, L. Reichel, G. Rodriguez, and X. Yu, Parameter determination for Tikhonov regularization problems in general form, J. Comput. Appl. Math., 343 (2018), pp. 12–25.
• [33] L. Reichel and G. Rodriguez, Old and new parameter choice rules for discrete ill-posed problems, Numer. Algorithms, 63 (2013), pp. 65–87.
• [34] L. Reichel and A. Shyshkov, A new zero-finder for Tikhonov regularization, BIT Numer. Math., 48 (2008), pp. 627–643.
• [35] L. Reichel and Q. Ye, Simple square smoothing regularization operators, Electron. Trans. Numer. Anal., 33 (2009), pp. 63–83.
• [36] C. B. Shaw, Jr., Improvements of the resolution of an instrument by numerical solution of an integral equation, J. Math. Anal. Appl., 37 (1972), pp. 83–112.
• [37] E. E. Tyrtyshnikov, Incomplete cross approximation in the mosaic-skeleton method, Computing, 64 (2000), pp. 367–380.
• [38] Y. Yamamoto, Y. Nakatsukasa, Y. Yanagisawa, and T. Fukaya, Roundoff error analysis of the CholeskyQR2 algorithm, Electron. Trans. Numer. Anal., 44 (2015), pp. 306–326.