Fast Iterative Solver for the Optimal Control of Time-Dependent PDEs with Crank-Nicolson Discretization in Time

In this article, we derive a new, fast, and robust preconditioned iterative solution strategy for the all-at-once solution of optimal control problems with time-dependent PDEs as constraints, including the heat equation and the non-steady convection–diffusion equation. After applying an optimize-then-discretize approach, one is faced with continuous first-order optimality conditions consisting of a coupled system of PDEs. As opposed to most work in preconditioning the resulting discretized systems, where a (first-order accurate) backward Euler method is used for the discretization of the time derivative, we employ a (second-order accurate) Crank–Nicolson method in time. We apply a carefully tailored invertible transformation for symmetrizing the matrix, and then derive an optimal preconditioner for the saddle-point system obtained. The key components of this preconditioner are an accurate mass matrix approximation, a good approximation of the Schur complement, and an appropriate multigrid process to apply this latter approximation—these are constructed using our work in transforming the matrix system. We prove the optimality of the approximation of the Schur complement through bounds on the eigenvalues, and test our solver against a widely-used preconditioner for the linear system arising from a backward Euler discretization. Numerical experiments demonstrate the effectiveness and robustness of our solver with respect to mesh size, regularization parameter, and diffusion coefficient.


1 Introduction

PDE-constrained optimization is a subject area which has attracted significant interest in recent years (see [19, 45] for a comprehensive overview of the field). Due to the complex structure and high dimensionality of these problems when an accurate discrete solution is sought, preconditioned iterative solvers have been employed for the ‘all-at-once’ resolution of such formulations; see for example [37, 39] for early work in this direction. In this paper, we consider fast and robust preconditioned solvers for matrix systems arising from the optimize-then-discretize approach applied to time-dependent PDE-constrained optimization problems. We examine the optimal control of the heat equation with Dirichlet boundary conditions, to allow benchmarking of our method against a widely-applied preconditioned iterative method for its solution, and time-dependent convection–diffusion control problems.

When preconditioners are sought for certain time-dependent problems, it is typical to apply a (first-order accurate) backward Euler method in time, as this leads to particularly convenient structures within the matrix and facilitates effective preconditioning; see [33] for a mesh- and β-robust preconditioner for the heat control problem, and [9, 32, 42, 44, 47] for applications to different problems. However, the required discretization strategy results in slower convergence in time than in space: if a method is second-order accurate in space, it is reasonable to choose τ ≈ h², where τ is the time step and h is the mesh-size in space. This results in a matrix system of huge dimension, and hence a high CPU time is required for its solution. The question we wish to investigate here is whether applying a higher-order Crank–Nicolson method in time is beneficial, due to the reduced dimension of the matrix system required to obtain a similar discretization error. The challenge here is that the much more complex structure of the resulting matrix system makes preconditioning a highly non-trivial task, so a more sophisticated numerical method and preconditioning strategy need to be devised to achieve fast and robust convergence of the iterative solver.
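The dimension gap can be illustrated with a back-of-the-envelope step count (our illustration, not taken from the paper): to balance a second-order spatial error, a first-order method in time needs τ ∼ h², while a second-order method only needs τ ∼ h.

```python
# Hypothetical step-count comparison on the time interval (0, 1):
# a method of temporal order q balances a spatial error O(h^2) when
# tau^q ~ h^2, i.e. tau ~ h^(2/q), so n_t ~ 1/tau.
def time_steps_needed(h, order):
    tau = h ** (2.0 / order)
    return round(1.0 / tau)

h = 1e-2
print(time_steps_needed(h, order=1))  # backward Euler: 10000 steps
print(time_steps_needed(h, order=2))  # Crank-Nicolson: 100 steps
```

With h = 10⁻², backward Euler requires roughly a hundred times as many time steps, and hence a matrix system roughly a hundred times larger, than Crank–Nicolson for a comparable discretization error.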

The remainder of this article is structured as follows. In Section 2 we introduce the problems we consider, that is time-dependent convection–diffusion control and heat control, and outline the matrices arising upon discretization of such problems as well as a stabilization technique used for convection–diffusion control problems. In Section 3 we describe the matrix systems obtained upon discretization of the first-order optimality conditions, and in particular demonstrate a suitable transformation which allows symmetrization of the matrix system obtained from the Crank–Nicolson method. This enables us to apply a symmetric iterative solver such as MINRES [30], which is highly desirable from the perspective of proving convergence of the iterative method. In Section 4 we derive our proposed new preconditioner, using saddle-point theory along with suitable approximations of the (1,1)-block and Schur complement, and provide eigenvalue results for the preconditioned matrix system. In Section 5 we benchmark our method against the widely-used backward Euler method, coupled with the preconditioner derived in [33], and demonstrate our new preconditioner’s efficiency and robustness with respect to all parameters involved in the convection–diffusion control problem.

2 Problem Formulation

In this article we consider the fast and robust numerical solution of time-dependent PDE-constrained optimization problems. In particular, we examine distributed convection–diffusion control problems of the form:

\[
\min_{y,u}\ J(y,u)=\frac12\int_0^{t_f}\!\!\int_\Omega|y(\mathbf x,t)-\hat y(\mathbf x,t)|^2\,\mathrm d\Omega\,\mathrm dt+\frac{\beta}{2}\int_0^{t_f}\!\!\int_\Omega|u(\mathbf x,t)|^2\,\mathrm d\Omega\,\mathrm dt\tag{1}
\]

subject to

\[
\begin{cases}
\dfrac{\partial y}{\partial t}-\epsilon\nabla^2y+\mathbf w\cdot\nabla y=u+f(\mathbf x,t)&\text{in }\Omega\times(0,t_f),\\
y(\mathbf x,t)=g_D(\mathbf x,t)&\text{on }\partial\Omega_D\times(0,t_f),\\
\dfrac{\partial y}{\partial n}(\mathbf x,t)=g_N(\mathbf x,t)&\text{on }\partial\Omega_N\times(0,t_f),\\
y(\mathbf x,0)=y_0(\mathbf x)&\text{in }\Omega,
\end{cases}\tag{2}
\]

where the variables y, ŷ, and u are the state, desired state, and control variables, respectively, β > 0 is a regularization parameter, ϵ > 0 is the diffusion coefficient, and w is a divergence-free wind vector (i.e., ∇ · w = 0). The problem is solved on a spatial domain Ω ⊂ ℝ^d, d ∈ {2, 3}, with boundary ∂Ω = ∂Ω_D ∪ ∂Ω_N, ∂Ω_D ∩ ∂Ω_N = ∅, up to a final time t_f > 0, that is (x, t) ∈ Ω × (0, t_f). Here ∂y/∂n represents the normal derivative of y on ∂Ω_N. The functions f, g_D, g_N, and y_0 are known.

In (2), the term −ϵ∇²y denotes the diffusive element, and the term w · ∇y represents convection. In physical (real-world) problems, as pointed out for example in [10], convection typically plays a more significant physical role than diffusion, so ϵ ≪ ‖w‖ for many practical problems. However, this in turn makes the problem more difficult to solve [10, 36], as the solution procedure will need to be robust with respect to the direction of the wind and any boundary or internal layers that form. The presence of boundary or internal layers is also an issue that we have to deal with when discretizing the convection–diffusion differential operator. Indeed, when solving a convection–diffusion problem, a stabilization procedure is often utilized in order to ‘capture’ all the layers.

Though we will test our solver on the convection–diffusion control problem (1)–(2), a particular case of interest arises when ϵ = 1, w = 0, and ∂Ω_N = ∅, giving the following heat control problem with Dirichlet boundary conditions:

\[
\min_{y,u}\ J(y,u)\tag{3}
\]
\[
\text{s.t.}\quad
\begin{cases}
\dfrac{\partial y}{\partial t}-\nabla^2y=u+f(\mathbf x,t)&\text{in }\Omega\times(0,t_f),\\
y(\mathbf x,t)=g(\mathbf x,t)&\text{on }\partial\Omega\times(0,t_f),\\
y(\mathbf x,0)=y_0(\mathbf x)&\text{in }\Omega.
\end{cases}\tag{4}
\]

This problem provides a valuable benchmark, as it allows us to compare our method against a widely-used preconditioned iterative solver that employs a backward Euler method in time [33]. We note that our method can be readily generalized to heat control problems with Neumann and mixed boundary conditions.

2.1 Discretization matrices and stabilization

To illustrate the matrices involved in the finite element discretizations of optimal control problems, consider a standard Galerkin finite element discretization for the (steady-state) convection–diffusion problem:

\[
-\epsilon\nabla^2y+\mathbf w\cdot\nabla y=u+f(\mathbf x)\quad\text{in }\Omega.\tag{5}
\]

Letting {ϕ_i}_{i=1}^{n_x} be the same finite element basis functions for y and u, we seek approximations y_h = Σ_{i=1}^{n_x} Y_i ϕ_i and u_h = Σ_{i=1}^{n_x} U_i ϕ_i. Letting the vectors y = {Y_i}_{i=1}^{n_x}, u = {U_i}_{i=1}^{n_x}, a discretized version of (5) is

\[
L\mathbf y:=(\epsilon K+N+W)\,\mathbf y=M\mathbf u+\mathbf f,
\]

where (excluding the effect of boundary conditions):

\[
K=\{k_{ij}\}_{i,j=1}^{n_x},\ \ k_{ij}=\int_\Omega\nabla\phi_i\cdot\nabla\phi_j\,\mathrm d\Omega,\qquad
N=\{n_{ij}\}_{i,j=1}^{n_x},\ \ n_{ij}=\int_\Omega(\mathbf w\cdot\nabla\phi_j)\,\phi_i\,\mathrm d\Omega,
\]
\[
M=\{m_{ij}\}_{i,j=1}^{n_x},\ \ m_{ij}=\int_\Omega\phi_i\phi_j\,\mathrm d\Omega,\qquad
\mathbf f=\{f_i\}_{i=1}^{n_x},\ \ f_i=\int_\Omega f\,\phi_i\,\mathrm d\Omega,
\]

and the matrix W denotes a possible stabilization matrix for the convection operator. For a heat control problem of the form (4), ϵ = 1, N = 0, and W = 0. We note that K is generally referred to as a stiffness matrix, and is symmetric positive semi-definite (positive definite unless ∂Ω_D = ∅), while M is referred to as a mass matrix, which is symmetric positive definite. The matrix N is skew-symmetric (meaning N^⊤ = −N) in the case of Dirichlet problems; otherwise we obtain that

\[
n_{ij}+n_{ji}=\int_{\partial\Omega_N}\phi_i\phi_j\,\mathbf w\cdot\mathbf n\,\mathrm ds,\tag{6}
\]

using Green’s theorem. In this latter case the spectral properties of the matrix H := N + N^⊤, which can be indefinite, will be useful for our subsequent analysis. We note that w · n = ‖w‖‖n‖ cos θ, where θ denotes the angle between the wind w and the (outer) unit normal vector n on the boundary ∂Ω_N, so setting c := max_{∂Ω_N} ‖w‖, we have that

\[
-c\leq\mathbf w\cdot\mathbf n\leq c.\tag{7}
\]

Defining the matrix

\[
M^{\partial\Omega_N}=\{m^{\partial\Omega_N}_{ij}\}_{i,j=1}^{n_x},\qquad m^{\partial\Omega_N}_{ij}=\int_{\partial\Omega_N}\phi_i\phi_j\,\mathrm ds,
\]

we may then write

\[
\mathbf v^\top\big(H+cM^{\partial\Omega_N}\big)\mathbf v=\sum_{i=1}^{n_x}\sum_{j=1}^{n_x}v_i\bigg[\int_{\partial\Omega_N}(\mathbf w\cdot\mathbf n+c)\,\phi_i\phi_j\,\mathrm ds\bigg]v_j=\int_{\partial\Omega_N}(\mathbf w\cdot\mathbf n+c)\,v^2\,\mathrm ds\geq 0
\]

for any v = {v_i}_{i=1}^{n_x} ∈ ℝ^{n_x}, where we set v = Σ_{i=1}^{n_x} v_i ϕ_i and use (7). Therefore,

\[
H\succeq-cM^{\partial\Omega_N},\tag{8}
\]

where the notation X ⪰ Y means that X − Y is positive semi-definite. Hence, any negative eigenvalues of H are dominated by a constant factor of those of the mass matrix related to the basis functions on ∂Ω_N. This observation will be useful when discussing our approach for problems with Neumann or mixed boundary conditions.
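To make the matrices above concrete, the following sketch (ours, not the paper's code) assembles K, M, and N for piecewise-linear elements on a uniform 1D mesh for Ω = (0, 1), with a constant wind w, pure Dirichlet conditions (boundary nodes excluded), and no stabilization, and checks the stated symmetry properties.

```python
import numpy as np

# Minimal 1D P1 finite element assembly (illustrative sketch only).
def assemble_1d(n_el, w=1.0):
    h = 1.0 / n_el
    n = n_el - 1                          # number of interior nodes
    K = np.zeros((n, n)); M = np.zeros((n, n)); N = np.zeros((n, n))
    for i in range(n):
        K[i, i] = 2.0 / h                 # int phi_i' phi_i'
        M[i, i] = 2.0 * h / 3.0           # int phi_i phi_i
        if i + 1 < n:
            K[i, i + 1] = K[i + 1, i] = -1.0 / h
            M[i, i + 1] = M[i + 1, i] = h / 6.0
            N[i, i + 1] = w / 2.0         # int (w phi_{i+1}') phi_i
            N[i + 1, i] = -w / 2.0
    return K, M, N

K, M, N = assemble_1d(8)
assert np.allclose(N, -N.T)               # N skew-symmetric (Dirichlet problem)
assert np.all(np.linalg.eigvalsh(M) > 0)  # mass matrix symmetric positive definite
assert np.all(np.linalg.eigvalsh(K) > 0)  # stiffness positive definite here
```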

Concerning the stabilization scheme used for solving the forward convection–diffusion problem, a popular stabilized finite element method is the Streamline Upwind Petrov–Galerkin (SUPG) method [20]. However, as pointed out in [8, 16], applying this scheme to the (steady-state) control problem gives rise to extra difficulties. Specifically, applying the scheme to the control problem with the discretize-then-optimize strategy leads to symmetric discrete equations in which the discrete adjoint problem is not a consistent discretization of the continuous adjoint problem, whereas the optimize-then-discretize approach gives rise to a different, non-symmetric discretized system which therefore does not possess the structure of a discrete optimization problem. In this work, we thus employ the adjoint-consistent Local Projection Stabilization (LPS) approach described in [4, 5, 6, 34], for which the discretization and optimization steps commute in the stationary case. Further, this stabilized finite element method leads to an order of convergence for the L²-error that is optimal on general quasi-uniform meshes for the forward problem [5]; see, e.g., [49]. In the LPS formulation, the stabilization matrix W is defined as

\[
W=\{w_{ij}\}_{i,j=1}^{n_x},\qquad w_{ij}=\delta\int_\Omega[\mathbf w\cdot\nabla\phi_i-\pi_h(\mathbf w\cdot\nabla\phi_i)][\mathbf w\cdot\nabla\phi_j-\pi_h(\mathbf w\cdot\nabla\phi_j)]\,\mathrm d\Omega.\tag{9}
\]

Here, δ denotes a stabilization parameter, and π_h is an orthogonal projection operator. From (9), the matrix W can be viewed as a shifted discrete diffusion operator associated with the streamline direction defined by w. It is symmetric and positive semi-definite, as shown by following the working in [10, p. 17]: letting v = Σ_{i=1}^{n_x} v_i ϕ_i, and setting π_h^i = π_h(w · ∇ϕ_i), π̃ = Σ_{i=1}^{n_x} v_i π_h^i, we have

\[
\mathbf v^\top W\mathbf v=\delta\sum_{i=1}^{n_x}\sum_{j=1}^{n_x}v_i\bigg[\int_\Omega[\mathbf w\cdot\nabla\phi_i-\pi_h^i][\mathbf w\cdot\nabla\phi_j-\pi_h^j]\,\mathrm d\Omega\bigg]v_j=\delta\int_\Omega(\mathbf w\cdot\nabla v-\tilde\pi)^2\,\mathrm d\Omega=\delta\,\|\mathbf w\cdot\nabla v-\tilde\pi\|^2_{L^2(\Omega)}\geq 0,
\]

where we have used the linearity of π_h and that δ ≥ 0.

For the convergence of the method, we require π_h to be an L²-orthogonal (discontinuous) projection operator defined on patches of the domain that satisfies the approximation and stability properties specified in [5], where by a patch we mean a union of elements of our finite element discretization; the projection operator is left free to be discontinuous on the edges of the patches. In our implementation we will make use of Q1 elements, so the domain is divided into patches consisting of 2 elements in each dimension. To ensure the aforementioned properties are satisfied, as in [4] we define π_h as

\[
\pi_h(v)|_P=\frac{1}{|P|}\int_Pv\,\mathrm dP,\qquad\forall v\in L^2(\Omega),
\]

where π_h(v)|_P is the restriction of π_h(v) to the patch P, and |P| is the (Lebesgue) measure of the patch. We refer again to [5] for the theoretical proof of the convergence of this method with this definition of the local projection operator. As in [10, p. 253], we choose δ locally on each patch P_k as δ|_{P_k} = δ_k, with

\[
\delta_k=\begin{cases}\dfrac{h_k}{2\|\mathbf w_k\|}\Big(1-\dfrac{1}{\mathbb P_k}\Big)&\text{if }\mathbb P_k>1,\\[4pt]0&\text{if }\mathbb P_k\leq 1,\end{cases}
\]

where ‖w_k‖ is the ℓ²-norm of the wind at the patch centroid, h_k is a measure of the patch length in the direction of the wind, and ℙ_k is the patch Péclet number.
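As a small illustration (ours; the patch Péclet number is computed as ‖w_k‖h_k/(2ϵ), following the element Péclet number of [10], which is an assumption of this sketch), the patch-wise parameter can be evaluated as:

```python
# Sketch of the patch-wise LPS parameter delta_k; h_k, the wind norm and
# the diffusion coefficient eps are assumed given (our naming).
def lps_delta(h_k, w_norm, eps):
    peclet = w_norm * h_k / (2.0 * eps)       # patch Peclet number (assumed form)
    if peclet <= 1.0:
        return 0.0                            # diffusion-dominated: no stabilization
    return h_k / (2.0 * w_norm) * (1.0 - 1.0 / peclet)

assert lps_delta(0.1, 1.0, 1.0) == 0.0        # diffusion-dominated patch
assert lps_delta(0.1, 1.0, 1e-4) > 0.0        # convection-dominated patch
```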

We observe that, with this (non-constant) choice of δ, the matrix W is still positive semi-definite. To prove this, it is sufficient to define π̃_k and v locally on each patch and then proceed as above, obtaining

\[
\mathbf v^\top W\mathbf v=\sum_k\delta_k\,\|\mathbf w\cdot\nabla v-\tilde\pi_k\|^2_{L^2(P_k)}\geq 0.
\]

The spectral properties of the matrices K, M, N, and W (whether Dirichlet, Neumann or mixed boundary conditions are imposed) will be useful later, when discussing the optimality of our preconditioning approach.

Informed by the definitions of this section, we make the following assumption when carrying out the theoretical analysis in the remainder of this paper, and later discuss how our methodology could be applied if the assumption is relaxed:

Assumption 1

When a convection–diffusion control problem is being solved we assume w is such that w · n ≥ 0 on ∂Ω_N. We allow heat control problems to satisfy any boundary conditions of the form (2), and for convection–diffusion control we remove this restriction on the wind for a pure Dirichlet problem (i.e., ∂Ω_N = ∅).

When Assumption 1 holds, the matrix H = N + N^⊤ ⪰ 0, by (6). There are a number of wind vectors that satisfy the property above on the whole of ∂Ω_N; see for example the well-known ‘recirculating wind’ example described in [10, p. 240].

3 First-Order Optimality Conditions and Discretizations in Time

We now describe the strategy used for obtaining an approximate solution of (1)–(2). We apply an all-at-once approach coupled with the optimize-then-discretize scheme, in which the continuous Lagrangian is used to arrive at first-order optimality conditions, which are then discretized. For simplicity the working of this section considers Dirichlet boundary conditions, that is ∂Ω_N = ∅, but it may be readily extended to problems where ∂Ω_N ≠ ∅. Introducing the adjoint variable (or Lagrange multiplier) p, we consider the Lagrangian associated to (1)–(2) as in [5]. Then, by deriving the Karush–Kuhn–Tucker (KKT) conditions, the solution of (1)–(2) satisfies:

\[
\begin{cases}
-\dfrac{\partial p}{\partial t}-\epsilon\nabla^2p-\mathbf w\cdot\nabla p+y=\hat y&\text{in }\Omega\times(0,t_f),\\
p(\mathbf x,t)=0\ \ \text{on }\partial\Omega\times(0,t_f),&p(\mathbf x,t_f)=0\ \ \text{in }\Omega,\\
\dfrac{\partial y}{\partial t}-\epsilon\nabla^2y+\mathbf w\cdot\nabla y-\dfrac{1}{\beta}\,p=f&\text{in }\Omega\times(0,t_f),\\
y(\mathbf x,t)=g(\mathbf x,t)\ \ \text{on }\partial\Omega\times(0,t_f),&y(\mathbf x,0)=y_0(\mathbf x)\ \ \text{in }\Omega,
\end{cases}\tag{10}
\]

where we have substituted the gradient equation into the state equation.

Problem (10) is a coupled system of (time-dependent) PDEs, consisting of a forward PDE for the state combined with a backward-in-time problem for the adjoint. Due to this structure, when trying to find a numerical approximation of the solution, an A-stable method is usually applied for discretizing the time derivative, since this allows the user to choose a time step that is independent of the spatial mesh-size. Further, in order to obtain a consistent system of linear equations, both functions y and p are approximated at the same time points.

We now introduce methods for approximating the time derivative when solving (10), and describe the resulting matrix systems, starting with the widely-used backward Euler method in Section 3.1, followed by the Crank–Nicolson method in Section 3.2. For the remainder of the paper, we discretize the interval (0, t_f) into n_t subintervals of length τ = t_f/n_t, and we use the notation y^n ≈ y(·, t_n), p^n ≈ p(·, t_n) for our approximations for all n = 0, …, n_t, with t_n = nτ.

3.1 Backward Euler discretization

Many widely-used preconditioned iterative methods for the solution of time-dependent PDE-constrained optimization problems of type (1)–(2) involve a backward Euler discretization for the time variable [9, 32, 33, 42, 44, 47]. Applying this scheme gives the following system of equations (note that the matrix (L_E)^⊤ arises in the discretization of the adjoint equation due to the skew-symmetry of N, which holds as Assumption 1 is trivially satisfied for Dirichlet problems):

\[
\begin{cases}
\tau My^n+(L_E)^\top p^n-Mp^{n+1}=\tau M\hat y^n,&n=0,\dots,n_t-1,\\
p^{n_t}=0,\qquad y^0=y_0,&\\
-My^n+L_Ey^{n+1}-\dfrac{\tau}{\beta}Mp^{n+1}=\tau f^{n+1},&n=0,\dots,n_t-1,
\end{cases}
\]

where L_E := M + τL, y_0 is the discretization of the initial condition for y, and

\[
f^n=\{f_i^n\}_{i=1}^{n_x},\qquad f_i^n=\int_\Omega f(\mathbf x,t_n)\,\phi_i\,\mathrm d\Omega.\tag{11}
\]

With this notation, we obtain the following (symmetric) linear system:

\[
\begin{bmatrix}A_E&(B_E)^\top\\[2pt]B_E&-C_E\end{bmatrix}\begin{bmatrix}\mathbf y\\\mathbf p\end{bmatrix}=\begin{bmatrix}b_1\\b_2\end{bmatrix},\qquad C_E:=\frac{1}{\beta}\,\mathrm{blkdiag}(0,\tau M,\dots,\tau M),\tag{12}
\]

where

\[
A_E=\begin{bmatrix}\tau M&&&\\&\ddots&&\\&&\tau M&\\&&&0\end{bmatrix},\qquad
B_E=\begin{bmatrix}L_E&&&\\-M&L_E&&\\&\ddots&\ddots&\\&&-M&L_E\end{bmatrix},
\]

and the right-hand side vectors b_1 and b_2 take into account the initial and final-time conditions on y and p, as well as information from the desired state ŷ and the force function f. Note that we have symmetrized the system by replacing the discretized initial and final-time conditions with

\[
(L_E)^\top p^{n_t}=0,\qquad L_Ey^0=L_Ey_0.
\]

The structure of the previous system is very convenient from the point of view of numerical linear algebra because it facilitates effective preconditioning.
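A compact way to realize this block structure in practice is via Kronecker products; the following is our sketch (placeholder matrices, illustrative block sizes), not code from the paper.

```python
import numpy as np
from scipy.sparse import diags, eye, kron

# Sketch (ours): assemble A_E = blkdiag(tau*M, ..., tau*M, 0) and the block
# lower-bidiagonal B_E with L_E on the diagonal and -M on the subdiagonal.
def backward_euler_blocks(M, L_E, tau, nt):
    d = diags([tau] * nt + [0.0])            # time-direction diagonal of A_E
    A_E = kron(d, M).tocsr()
    B_E = (kron(eye(nt + 1), L_E) - kron(eye(nt + 1, k=-1), M)).tocsr()
    return A_E, B_E

nx, nt, tau = 3, 4, 0.1
M = eye(nx) * 0.5                            # placeholder mass matrix
L_E = eye(nx) * 2.0                          # placeholder L_E = M + tau*L
A_E, B_E = backward_euler_blocks(M, L_E, tau, nt)
assert A_E.shape == ((nt + 1) * nx,) * 2
assert abs(A_E[nt * nx:, nt * nx:]).sum() == 0   # final zero block of A_E
```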

3.2 Crank–Nicolson discretization and symmetrization of the system

Despite the convenient structure of the matrix system arising from the backward Euler method, the essential drawback is that the method is only first-order accurate in time. For instance, if a second-order accurate method is used for the spatial discretization, the numerical error is of order 𝒪(τ + h²), given sufficient smoothness properties of the solution. Therefore, it is reasonable to choose τ ≈ h², resulting in a matrix system of huge dimension, and hence an extremely high CPU time is required for its solution. In recent years, a significant effort has been invested in improving the accuracy of the discretized solution of time-dependent PDE-constrained optimization problems involving the backward Euler method: see [14] for an application of deferred correction to time-dependent PDE-constrained optimization problems, [9, 41, 42, 43] for low-rank tensor approaches to speed up the convergence of backward Euler, [17] for a multigrid technique applied to optimal flow control problems, and [48] for a preconditioned iterative solver for problems of the type (3)–(4) that uses a reduction in dimensionality of the arising system.

We also highlight recent works in which higher-order time discretizations are considered. For example, in [13] the authors derive a parallel-in-time algorithm coupled with a gradient-based method, using a Lobatto IIIA scheme in the time variable, and in [9] the authors note that their low-rank method can also be adapted to the Crank–Nicolson approach. Other valuable approaches for addressing time-dependent PDE-constrained optimization problems include multiple shooting methods [15], parareal schemes [23, 24], and ideas from instantaneous control [7, 18]. However, to the best of our knowledge, the crucial question of preconditioning PDE-constrained optimization methods by exploiting the precise matrix structures arising from higher-order discretization schemes in time has not been resolved, perhaps due to the increased complexity in the structure of the matrix systems, an issue pointed out in [17] for instance.

Remark 1

We note here that both backward Euler and Crank–Nicolson are unconditionally A-stable, that is, no restriction relating τ to h is required. There are often other stability properties to consider when selecting an appropriate time-stepping scheme: for instance Crank–Nicolson, as opposed to backward Euler, is not L-stable, which is an important consideration when applying long-range integrations. In this work, we do not address specific questions about stability properties of the two methods, but rather whether it is possible in principle to take advantage of a higher-order discretization scheme in the time variable for increasing the rate of convergence of solvers for the optimal control of time-dependent PDEs. It is likely that our proposed method could be extended to time-stepping methods that achieve even faster convergence than Crank–Nicolson, including L-stable methods. We highlight the reference [2], which considers the question of preconditioning matrix systems arising from L-stable methods for forward evolutionary PDEs.

Remark 2

We note that, in order for Crank–Nicolson to achieve a second-order convergence rate, we require the state y and the adjoint variable p in (10) to have sufficient smoothness. Therefore, in the following analysis, we assume that the regularity assumptions and the results in [1, Sec. 3] hold for the problem considered.

Considering again (10), we now discretize the time derivative using the Crank–Nicolson method. Denoting

\[
L^{+}=\frac{\tau}{2}L+M,\qquad L^{-}=\frac{\tau}{2}L-M,\qquad \bar M=\frac{\tau}{2}M,\qquad \bar M_\beta=\frac{\tau}{2\beta}M,
\]

we have that the numerical solution of (10) satisfies

\[
\begin{cases}
\bar M(y^n+y^{n+1})+(L^{+})^\top p^n+(L^{-})^\top p^{n+1}=\bar M(\hat y^n+\hat y^{n+1}),&n=0,\dots,n_t-1,\\[2pt]
L^{-}y^n+L^{+}y^{n+1}-\bar M_\beta(p^n+p^{n+1})=\dfrac{\tau}{2}(f^n+f^{n+1}),&n=0,\dots,n_t-1,
\end{cases}
\]

with p^{n_t} = 0 and y^0 = y_0 appropriate discretizations of the final- and initial-time conditions on p and y, and f^n defined as in (11). In matrix form, we write

\[
\begin{bmatrix}\bar\Lambda_{11}&\bar\Lambda_{12}\\[2pt]\bar\Lambda_{21}&\bar\Lambda_{22}\end{bmatrix}\begin{bmatrix}\mathbf y\\\mathbf p\end{bmatrix}=\begin{bmatrix}\bar b_1\\\bar b_2\end{bmatrix},\tag{13}
\]

where the vectors y and p contain the numerical solutions for the state and adjoint variables, and as before the right-hand side accounts for the initial and final-time conditions on y and p, as well as the desired state ŷ and force function f. The matrices Λ̄_11, Λ̄_12, Λ̄_21, Λ̄_22 are given by

\[
\bar\Lambda_{11}=\begin{bmatrix}\bar M&\bar M&&&\\&\ddots&\ddots&&\\&&\bar M&\bar M&\\&&&&0\end{bmatrix},\qquad
\bar\Lambda_{12}=\begin{bmatrix}(L^{+})^\top&(L^{-})^\top&&&\\&\ddots&\ddots&&\\&&(L^{+})^\top&(L^{-})^\top&\\&&&&M\end{bmatrix},
\]
\[
\bar\Lambda_{21}=\begin{bmatrix}M&&&\\L^{-}&L^{+}&&\\&\ddots&\ddots&\\&&L^{-}&L^{+}\end{bmatrix},\qquad
\bar\Lambda_{22}=-\begin{bmatrix}0&&&\\\bar M_\beta&\bar M_\beta&&\\&\ddots&\ddots&\\&&\bar M_\beta&\bar M_\beta\end{bmatrix}.
\]

We immediately realize that the matrix in (13) is not symmetric, as Λ̄_21 ≠ Λ̄_12^⊤, and therefore no iterative method for symmetric matrices, such as MINRES, may be applied to (13). We therefore wish to apply a transformation to (13) to convert it to a symmetric system. We partition the matrix in (13) as

\[
\left[\begin{array}{ccccc|ccccc}
\bar M&\bar M&&&&(L^{+})^\top&(L^{-})^\top&&&\\
&\ddots&\ddots&&&&\ddots&\ddots&\\
&&\bar M&\bar M&&&&(L^{+})^\top&(L^{-})^\top&\\
&&&&0&&&&&M\\\hline
M&&&&&0&&&&\\
L^{-}&L^{+}&&&&-\bar M_\beta&-\bar M_\beta&&&\\
&\ddots&\ddots&&&&\ddots&\ddots&\\
&&&L^{-}&L^{+}&&&&-\bar M_\beta&-\bar M_\beta
\end{array}\right].
\]

Then if we eliminate the initial and final-time conditions on y and p, we can rewrite the system as

\[
\begin{bmatrix}\Lambda_{11}&\Lambda_{12}\\[2pt]\Lambda_{21}&-\Lambda_{22}\end{bmatrix}\begin{bmatrix}\mathbf y\\\mathbf p\end{bmatrix}=\begin{bmatrix}b_1\\b_2\end{bmatrix},
\]

with the vectors y, p, b_1, b_2 modified accordingly, and the matrices Λ_11, Λ_12, Λ_21, Λ_22 given by

\[
\Lambda_{11}=\begin{bmatrix}\bar M&&&\\\bar M&\bar M&&\\&\ddots&\ddots&\\&&\bar M&\bar M\end{bmatrix},\qquad
\Lambda_{12}=\begin{bmatrix}(L^{+})^\top&(L^{-})^\top&&\\&\ddots&\ddots&\\&&(L^{+})^\top&(L^{-})^\top\\&&&(L^{+})^\top\end{bmatrix},
\]
\[
\Lambda_{21}=\begin{bmatrix}L^{+}&&&\\L^{-}&L^{+}&&\\&\ddots&\ddots&\\&&L^{-}&L^{+}\end{bmatrix}=\Lambda_{12}^\top,\qquad
\Lambda_{22}=\begin{bmatrix}\bar M_\beta&\bar M_\beta&&\\&\ddots&\ddots&\\&&\bar M_\beta&\bar M_\beta\\&&&\bar M_\beta\end{bmatrix}.\tag{14}
\]

In order to symmetrize the system, we now apply the following linear transformation:

\[
\mathcal T=\begin{bmatrix}T_1&0\\0&T_2\end{bmatrix},\qquad
T_1=\begin{bmatrix}I&I&&\\&\ddots&\ddots&\\&&I&I\\&&&I\end{bmatrix},\qquad
T_2=\begin{bmatrix}I&&&\\I&I&&\\&\ddots&\ddots&\\&&I&I\end{bmatrix}=T_1^\top,\tag{15}
\]

where T_1, T_2 ∈ ℝ^{n_t n_x × n_t n_x}, and I denotes the n_x × n_x identity matrix. Then,

\[
\mathcal T\begin{bmatrix}\Lambda_{11}&\Lambda_{12}\\\Lambda_{21}&-\Lambda_{22}\end{bmatrix}\begin{bmatrix}\mathbf y\\\mathbf p\end{bmatrix}=\underbrace{\begin{bmatrix}A&B^\top\\B&-C\end{bmatrix}}_{\mathcal A}\begin{bmatrix}\mathbf y\\\mathbf p\end{bmatrix}=\mathcal T\begin{bmatrix}b_1\\b_2\end{bmatrix},\tag{16}
\]

where A = T_1Λ_11, B = T_2Λ_21, and C = T_2Λ_22.

Furthermore, it holds that

\[
B=T_2\Lambda_{21}=\begin{bmatrix}L^{+}&&&&\\\tau L&L^{+}&&&\\L^{-}&\tau L&L^{+}&&\\&\ddots&\ddots&\ddots&\\&&L^{-}&\tau L&L^{+}\end{bmatrix}=(T_1\Lambda_{12})^\top.
\]

We have thus transformed (13) to a symmetric system using the matrices T_1, T_2, which are easy and cheap to apply and invert. We also highlight the further properties that

\[
A=T_1A_DT_2,\qquad C=T_2C_DT_1,\tag{17}
\]

where

\[
A_D=\begin{bmatrix}\bar M&&\\&\ddots&\\&&\bar M\end{bmatrix},\qquad
C_D=\begin{bmatrix}\bar M_\beta&&\\&\ddots&\\&&\bar M_\beta\end{bmatrix},\tag{18}
\]

so we may work with the matrices A and C cheaply, using A^{-1} = T_2^{-1}A_D^{-1}T_1^{-1} and C^{-1} = T_1^{-1}C_D^{-1}T_2^{-1}, together with the block diagonal structure of A_D and C_D. Further, since both A_D and C_D are symmetric positive definite, the same holds for A and C.
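These properties are easy to verify numerically; the following sketch (ours, with random placeholder matrices of the correct symmetry classes) builds the blocks in (14)–(15) for small n_x and n_t, and checks that A and C are symmetric and that B = (T_1Λ_12)^⊤.

```python
import numpy as np

# Numerical check (ours) that the transformation in (15) symmetrizes the system.
rng = np.random.default_rng(0)
nx, nt, tau, beta = 3, 4, 0.1, 1e-2

Q = rng.standard_normal((nx, nx))
M = Q @ Q.T + nx * np.eye(nx)            # s.p.d. stand-in for the mass matrix
K = np.diag(np.arange(1.0, nx + 1))      # s.p.d. stand-in for the stiffness
W = rng.standard_normal((nx, nx))
N = W - W.T                              # skew-symmetric convection part
L = K + N
Lp, Lm = tau / 2 * L + M, tau / 2 * L - M
Mbar, Mbeta = tau / 2 * M, tau / (2 * beta) * M

Z = np.eye(nt, k=-1)                     # subdiagonal shift in the time direction
def lower_bidiag(d, s):                  # blocks d on the diagonal, s below it
    return np.kron(np.eye(nt), d) + np.kron(Z, s)

Lam11 = lower_bidiag(Mbar, Mbar)
Lam21 = lower_bidiag(Lp, Lm)
Lam12 = Lam21.T
Lam22 = lower_bidiag(Mbeta, Mbeta).T     # upper bidiagonal
T2 = lower_bidiag(np.eye(nx), np.eye(nx))
T1 = T2.T

A, B, C = T1 @ Lam11, T2 @ Lam21, T2 @ Lam22
assert np.allclose(A, A.T) and np.allclose(C, C.T)   # symmetrized blocks
assert np.allclose(B, (T1 @ Lam12).T)                # B = (T1*Lam12)^T
```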

So, in order to find an approximate solution to (10), we may now consider the saddle-point system (16), to which we can apply a preconditioned Krylov subspace method for symmetric indefinite matrices, such as MINRES.

4 Preconditioning Approach

In this section we describe an optimal preconditioner for the system (16), by making use of saddle-point theory as well as suitable approximations for the blocks of the matrix 𝒜.

We know that if we wish to solve an invertible system of the form (16), with invertible (1,1)-block A, a good candidate for a preconditioner is the block diagonal matrix:

\[
\mathcal P=\begin{bmatrix}A&0\\0&S\end{bmatrix},\tag{19}
\]

where S denotes the (negative) Schur complement S = C + BA^{-1}B^⊤. Indeed, if C = 0 and S is also invertible, the preconditioned matrix 𝒫^{-1}𝒜 will have only 3 distinct eigenvalues [21, 25]:

\[
\lambda(\mathcal P^{-1}\mathcal A)=\bigg\{1,\frac{1\pm\sqrt5}{2}\bigg\},
\]

where we denote the set of eigenvalues of a matrix X by λ(X). If instead A and C are symmetric positive definite, it holds that (see [3, 40] and [31, Theorem 4]):

\[
\lambda(\mathcal P^{-1}\mathcal A)\in\bigg[-1,\frac{1-\sqrt5}{2}\bigg]\cup\bigg[1,\frac{1+\sqrt5}{2}\bigg].
\]

In these cases, a preconditioned Krylov subspace method for symmetric indefinite matrices with preconditioner (19) would converge in few iterations (at most 3 if C = 0). We further observe that, if A and S are symmetric positive definite, the preconditioner 𝒫 also has that property, so a natural choice of iterative solver is the MINRES algorithm. Of course, the preconditioner as in (19) is not practical, as the exact inverses of A and S will need to be applied in each case. Applying S^{-1} is particularly problematic, as even when A, B, and C are sparse, S is generally dense. For this reason, we wish to find a suitable approximation 𝒫̂ of 𝒫, with

\[
\hat{\mathcal P}=\begin{bmatrix}\hat A&0\\0&\hat S\end{bmatrix},
\]

or, more precisely, a cheap application of the effect of 𝒫̂^{-1} on a generic vector. In the following section we commence by finding a good approximation of the inverse of the matrix A, then in Section 4.3 we describe the approach used for approximating the Schur complement S.
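The three-eigenvalue result quoted above is easy to observe numerically; here is a small self-contained check (ours, with random placeholder blocks, not from the paper):

```python
import numpy as np

# Numerical check (ours) of the classical result: with C = 0 and the exact
# Schur complement S = B A^{-1} B^T, the matrix P^{-1} [[A, B^T], [B, 0]]
# has eigenvalues {1, (1 +/- sqrt(5))/2} only.
rng = np.random.default_rng(1)
n, m = 6, 3
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)               # s.p.d. placeholder (1,1)-block
B = rng.standard_normal((m, n))           # generically full-rank constraint block
S = B @ np.linalg.solve(A, B.T)

KKT = np.block([[A, B.T], [B, np.zeros((m, m))]])
P = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), S]])
eigs = np.linalg.eigvals(np.linalg.solve(P, KKT))

targets = np.array([1.0, (1 + np.sqrt(5)) / 2, (1 - np.sqrt(5)) / 2])
assert np.allclose(eigs.imag, 0, atol=1e-7)
assert all(np.min(np.abs(e.real - targets)) < 1e-7 for e in eigs)
```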

4.1 Preconditioning the backward Euler system

Since we will benchmark our new preconditioning strategy against the preconditioner derived in [33] for heat control (i.e., problem (3)–(4)) after applying the optimize-then-discretize approach with backward Euler time-stepping, we briefly describe the preconditioner for the system (12), itself also a saddle-point system. In this case the (1,1)-block is not invertible; a good preconditioner is found to be

\[
\hat{\mathcal P}_E=\begin{bmatrix}\hat A_E&0\\0&\hat S_E\end{bmatrix},
\]

where

\[
\hat A_E=\begin{bmatrix}\tau M&&&\\&\ddots&&\\&&\tau M&\\&&&\xi\tau M\end{bmatrix},\qquad
\hat S_E=(B_E+\hat M_E)(\hat A_E)^{-1}(B_E+\hat M_E)^\top,\tag{20}
\]

with

\[
\hat M_E=\frac{\tau}{\sqrt\beta}\begin{bmatrix}M&&&\\&\ddots&&\\&&M&\\&&&\sqrt\xi M\end{bmatrix}.\tag{21}
\]

Here, 0 < ξ ≤ 1 is chosen such that the (invertible) matrix Â_E is ‘close enough’ to the matrix A_E in some sense. The (1,1)-block is thus approximated by a ‘perturbed’ matrix Â_E, whose inverse is then applied within the matrix Ŝ_E. A term of the form (21) is justified in Section 4.3.

4.2 Approximation of the (1,1)-block

We now focus on devising a preconditioner for the matrix system (16), arising from a Crank–Nicolson discretization, starting with a cheap and effective approximation of the matrix A. We recall from (17) that A can be written as A = T_1A_DT_2, with A_D defined as in (18). We observe that A_D is a block diagonal matrix with each diagonal block a multiple of M. Therefore, in order to obtain a cheap approximation for A, we require a suitable way to approximate the inverse of a mass matrix. As discussed in [11, 12, 46], the Chebyshev semi-iterative method represents a cheap and effective way to do this. From the discussion above, a good approximation of A is therefore given by Â = T_1Â_DT_2, where Â_D is the block diagonal matrix in which the inverse of each diagonal block of A_D is approximated by a fixed number of steps (20, in our tests) of Chebyshev semi-iteration applied to M̄. Further, since T_1 and T_2 are (block) upper- and lower-triangular matrices respectively (which makes them very easy to invert), we use backward and forward (block-)substitution in order to apply their inverses.

It is trivial to prove the following result on the effectiveness of our approximation Â: the minimum (maximum) eigenvalue of Â^{-1}A is bounded below (above) by the minimum (maximum) eigenvalue of the matrix obtained by applying the Chebyshev semi-iteration approximation of M̄^{-1} to M̄.
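For reference, the following is a minimal sketch (ours) of the Chebyshev semi-iteration building block: it solves M x = b, preconditioned by the diagonal of M, assuming eigenvalue bounds lmin, lmax for diag(M)^{-1}M are known (for the 1D linear-element mass matrix used below these are 1/2 and 3/2; for Q1 mass matrices such bounds are likewise available).

```python
import numpy as np

def chebyshev_semi_iteration(A, b, steps, lmin, lmax):
    # Chebyshev acceleration of the Jacobi iteration for A x = b, where
    # lmin, lmax bound the eigenvalues of D^{-1} A with D = diag(A).
    D = np.diag(A)
    theta = 0.5 * (lmax + lmin)
    delta = 0.5 * (lmax - lmin)
    sigma = theta / delta
    rho = 1.0 / sigma
    x = np.zeros_like(b)
    r = b.copy()
    d = (r / D) / theta
    for _ in range(steps):
        x = x + d
        r = r - A @ d
        rho_new = 1.0 / (2.0 * sigma - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * (r / D)
        rho = rho_new
    return x

n, h = 20, 1.0 / 21
Mbar = h * (np.diag(np.full(n, 2 / 3)) + np.diag(np.full(n - 1, 1 / 6), 1)
            + np.diag(np.full(n - 1, 1 / 6), -1))    # 1D P1 mass matrix
b = np.ones(n)
x = chebyshev_semi_iteration(Mbar, b, steps=20, lmin=0.5, lmax=1.5)
assert np.allclose(Mbar @ x, b, rtol=1e-8, atol=1e-10)
```

Twenty steps already reduce the relative error far below typical outer-solver tolerances, which is why a fixed small step count suffices inside the preconditioner.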

4.3 Approximation of the Schur complement

We now find a suitable approximation for the Schur complement S of (16). In the forthcoming theory, we assume that Assumption 1 holds, and later discuss the case where this is relaxed. Recalling how the matrices A, B, and C can be written as linear transformations involving the matrices A_D, C_D, and Λ_21, we first rewrite S in the following way:

\[
S=C+BA^{-1}B^\top=T_2\big(C_D+B_DA_D^{-1}B_D^\top\big)T_2^\top,\tag{22}
\]

where we set

\[
B_D=\Lambda_{21}T_2^{-1}.\tag{23}
\]

It is hence clear that if we find a symmetric positive definite approximation S̃_int of

\[
S_{\mathrm{int}}=C_D+B_DA_D^{-1}B_D^\top,\tag{24}
\]

then Ŝ = T_2S̃_intT_2^⊤ is a symmetric positive definite approximation of S. We may therefore use the (generalized) Rayleigh quotient in order to find upper and lower bounds for the eigenvalues of the matrix Ŝ^{-1}S as follows. Let λ be an eigenvalue of Ŝ^{-1}S with x the corresponding eigenvector; then

\[
\hat S^{-1}Sx=\lambda x\ \ \Rightarrow\ \ Sx=\lambda\hat Sx\ \ \Rightarrow\ \ \lambda=\frac{x^\top Sx}{x^\top\hat Sx}=\frac{\tilde x^\top S_{\mathrm{int}}\tilde x}{\tilde x^\top\tilde S_{\mathrm{int}}\tilde x},\tag{25}
\]

where we set x̃ = T_2^⊤x and use (22) together with the definition of Ŝ. Thus upper and lower bounds for the eigenvalues of Ŝ^{-1}S are given by the maximum and the minimum eigenvalues of S̃_int^{-1}S_int, respectively.

From (22), and due to the convenient structure of the matrices A_D and C_D in (18), we can devise an approximation of S_int using the ‘matching strategy’ discussed in [33, 35] as follows. We seek an approximation:

\[
\tilde S_{\mathrm{int}}=(\Lambda_{21}+\hat M)A^{-1}(\Lambda_{21}+\hat M)^\top\approx S_{\mathrm{int}}\tag{26}
\]

such that S̃_int ‘captures’ both terms of S_int, namely C_D and B_DA_D^{-1}B_D^⊤ (noting that Λ_21A^{-1}Λ_21^⊤ = B_DA_D^{-1}B_D^⊤). We therefore wish that

\[
\hat MA^{-1}\hat M^\top=C_D.\tag{27}
\]

Using the definitions of A_D and C_D from (18), we obtain that for this to hold the matrix M_D := M̂T_2^{-1} is given by

\[
M_D=\frac{1}{\sqrt\beta}A_D,\tag{28}
\]

and therefore

\[
\hat M=\frac{\tau}{2\sqrt\beta}\begin{bmatrix}M&&&\\M&M&&\\&\ddots&\ddots&\\&&M&M\end{bmatrix}.\tag{29}
\]

Finally, our approximation of S is given by

\[
\hat S=T_2(\Lambda_{21}+\hat M)A^{-1}(\Lambda_{21}+\hat M)^\top T_2^\top=(\Lambda_{21}+\hat M)A_D^{-1}(\Lambda_{21}+\hat M)^\top,\tag{30}
\]

with M̂ as defined in (29); the two expressions are equivalent since T_2 commutes with both Λ_21 and M̂. To understand the effectiveness of this approximation, we recall (25), which tells us that we only need to study the spectrum of the matrix S̃_int^{-1}S_int. We next rewrite S̃_int as follows:

\[
\begin{aligned}
\tilde S_{\mathrm{int}}&=(\Lambda_{21}+\hat M)A^{-1}(\Lambda_{21}+\hat M)^\top\\
&=\hat MA^{-1}\hat M^\top+\Lambda_{21}A^{-1}\Lambda_{21}^\top+\hat MA^{-1}\Lambda_{21}^\top+\Lambda_{21}A^{-1}\hat M^\top\\
&=C_D+B_DA_D^{-1}B_D^\top+M_DA_D^{-1}B_D^\top+B_DA_D^{-1}M_D^\top\\
&=S_{\mathrm{int}}+M_DA_D^{-1}B_D^\top+B_DA_D^{-1}M_D^\top,
\end{aligned}\tag{31}
\]

where we have used (27), (17), (23), and (28) in turn.

Since S_int and S̃_int are symmetric positive definite, we again consider the generalized Rayleigh quotient:

\[
R:=\frac{x^\top S_{\mathrm{int}}x}{x^\top\tilde S_{\mathrm{int}}x}=\frac{a^\top a+b^\top b}{a^\top a+b^\top b+a^\top b+b^\top a},\tag{32}
\]

where a = A_D^{-1/2}B_D^⊤x and b = A_D^{-1/2}M_D^⊤x, noting from (18) and (28) that C_D = M_DA_D^{-1}M_D^⊤. As a^⊤b = b^⊤a, we have

\[
R\geq\frac12\ \ \Leftrightarrow\ \ a^\top a+b^\top b\geq\frac12\big(a^\top a+b^\top b+a^\top b+b^\top a\big)\ \ \Leftrightarrow\ \ \frac12(a-b)^\top(a-b)\geq 0,
\]

which is clearly satisfied, since it is the squared norm ‖a − b‖² ≥ 0.
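The lower bound just derived can be sanity-checked numerically (our sketch, with random placeholder matrices in place of A_D and B_D):

```python
import numpy as np

# Sanity check (ours): taking C_D = M_D A_D^{-1} M_D^T, the eigenvalues of
# S_tilde^{-1} S_int are bounded below by 1/2, as proved above.
rng = np.random.default_rng(2)
n, beta = 8, 1e-2
Q = rng.standard_normal((n, n))
A_D = Q @ Q.T + n * np.eye(n)            # s.p.d. placeholder for A_D
B_D = rng.standard_normal((n, n))        # placeholder for B_D
M_D = A_D / np.sqrt(beta)                # M_D = A_D / sqrt(beta), as in (28)
C_D = M_D @ np.linalg.solve(A_D, M_D.T)

S_int = C_D + B_D @ np.linalg.solve(A_D, B_D.T)
cross = M_D @ np.linalg.solve(A_D, B_D.T)
S_tilde = S_int + cross + cross.T        # as in (31)

eigs = np.linalg.eigvals(np.linalg.solve(S_tilde, S_int)).real
assert eigs.min() > 0.5 - 1e-8
```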

In order to find an upper bound for the Rayleigh quotient (32), we return to (31). Noting that

\[
M_DA_D^{-1}=\frac{1}{\sqrt\beta}\,I,
\]