Parameter-Robust Preconditioning for Oseen Iteration Applied to Stationary and Instationary Navier–Stokes Control

07/31/2021, by Santolo Leveque, et al.

We derive novel, fast, and parameter-robust preconditioned iterative methods for steady and time-dependent Navier–Stokes control problems. Our approach may be applied to time-dependent problems which are discretized using backward Euler or Crank–Nicolson, and is also a valuable candidate for Stokes control problems discretized using Crank–Nicolson. The key ingredients of the solver are a saddle-point type approximation for the linear systems, an inner iteration for the (1,1)-block accelerated by a preconditioner for convection–diffusion control, and an approximation to the Schur complement based on a potent commutator argument applied to an appropriate block matrix. A range of numerical experiments validate the effectiveness of our new approach.

1 Introduction

Optimal control problems with PDE constraints have received increasing research interest of late, due to their applicability to scientific and industrial problems, and also due to the difficulties arising in their numerical solution (see [18, 41] for excellent overviews of the field). An example of a highly challenging problem attracting significant attention is the (distributed) control of incompressible viscous fluid flow problems. Here, the constraints may be the (non-linear) incompressible Navier–Stokes equations or, in the limiting case of viscous flow, the (linear) incompressible Stokes equations. The control of the Navier–Stokes equations is of particular interest: due to the non-linearity involved, linearizations of the constrained problem need to be solved repeatedly until a sufficient reduction in the non-linear residual is achieved [16, 17, 35]. This has motivated researchers to devise solvers for this type of problem which exhibit robustness with respect to all the parameters involved; see [17] for a robust multigrid method applied to Newton iteration for instationary Navier–Stokes control, for instance. Despite the recent development of parameter-robust preconditioners for the control of the (stationary and instationary time-periodic) Stokes equations [2, 22, 37, 43], to our knowledge no such preconditioner has proved to be completely robust when applied to the Navier–Stokes control problem considered here. We also point out [15] for a preconditioned iterative solver for Stokes and Navier–Stokes boundary control problems, and [36] for an efficient and robust preconditioning technique for in-domain Navier–Stokes control.

A popular preconditioner for the Oseen linearization of the forward stationary Navier–Stokes equations combines saddle-point theory with a commutator argument for approximating the Schur complement [21]. This type of preconditioner shows only a mild dependence on the viscosity parameter, and is robust with respect to the discretization parameter. In [32] the combination of saddle-point theory with a commutator argument has also been adapted to the control of the stationary Navier–Stokes equations; we note that a commutator argument of this type was first introduced in [31] for the control of the stationary Stokes equations. In this work, we utilize a commutator argument for a block matrix in conjunction with saddle-point theory in order to derive robust preconditioners for the optimal control of the incompressible Navier–Stokes equations, in both the stationary and instationary settings. For instationary problems our approach leads to potent preconditioners when either the backward Euler or Crank–Nicolson scheme is used in the time variable, and also leads to a preconditioner for the instationary Stokes control problem using Crank–Nicolson.

This article is structured as follows. In Section 2, we define the problems that we examine, that is, the stationary and instationary Navier–Stokes control problems, along with instationary Stokes control; we then present the linearization adopted in this work, and outline the linear systems arising upon discretization of the forward problem. In Section 3, we introduce the saddle-point theory used to devise optimal preconditioners, giving as an example a preconditioner for the forward stationary Navier–Stokes equations in combination with the commutator argument presented in [21]; the latter will then be generalized when multiple differential operators are involved in the system of equations. In Section 4, we derive the first-order optimality conditions of the control problems and their discretization. In Section 5, we present our suggested preconditioners, and in particular the commutator argument applied to the Schur complements arising from the control problems. Then, in Section 6 we provide numerical results that show the robustness and efficiency of our approach.

2 Problem Formulation

In this work we derive fast and robust preconditioned iterative methods for the distributed control of incompressible viscous fluid flow; in this case, the physics is described by the (stationary or instationary) incompressible Navier–Stokes equations. The corresponding distributed control problem is defined as a minimization of a least-squares cost functional subject to the PDEs.

Specifically, given a spatial domain , , the stationary Navier–Stokes control problem we consider is

(1)

subject to

(2)

where the state variables and denote velocity and pressure respectively, is the desired state (velocity), and is the control variable. Further, is a regularization parameter, and is the viscosity of the fluid. The functions and are known.

Similarly, the control of the instationary Navier–Stokes equations is defined as

(3)

given also a final time , subject to

(4)

using the same notation as above. As for the stationary case, the functions and are known; the initial condition is also given. In the following, we assume that is solenoidal, i.e. , adapting our strategy to the general case when possible.

The constraints (2) and (4) are a system of non-linear (stationary or instationary) PDEs. In order to obtain a solution of the corresponding control problem, we make use of the Oseen linearization of the non-linear term , as in [35].

If the non-linear term is dropped in (2) or (4), with , we obtain the corresponding distributed Stokes control problem. Many parameter-robust preconditioners for the stationary Stokes control problem have been derived in the literature [37, 43]; however, less progress has been made towards the parameter-robust solution of instationary Stokes control problems, except in the time-periodic setting [2, 22]. Below, we derive a preconditioner that will also result in a robust solver for the general formulation of the instationary distributed Stokes control problem, defined as the minimization of the functional (3) subject to the instationary Stokes equations.

2.1 Non-linear iteration and discretization matrices

To introduce the linearization adopted for the control case as well as the discretization matrices, we consider the stationary Navier–Stokes equations:

(5)

with on . First, we introduce the weak formulation of (5) as follows. Let , , and , with the Sobolev space of square-integrable functions in with square-integrable weak derivatives; then, the weak formulation reads as:

Find and such that

(6)

where is the -inner product on . The main issue in (6) is how to deal with the non-linear term . A common strategy employs the Picard iteration, which is described as follows. Given the approximations and to and respectively, we consider the non-linear residuals:

(7)

for any and . Then, the Picard iteration is defined as [10, p. 345–346]:

(8)

where and are the solutions of

(9)

for any and . Equations (9) are the Oseen equations for the forward stationary Navier–Stokes equations. These are posed on the continuous level, so in order to find a solution to (5) we now discretize them. Before doing so, we note that (9) represents an incompressible convection–diffusion equation, with wind vector defined by , and it is clear that, for , the problem is convection-dominated. This requires us to make use of a stabilization procedure.

Letting and be inf–sup stable finite element basis functions for the Navier–Stokes equations, we seek approximations , , . Denoting the vectors , , , a discretized version of (7) is:

where we set , with

and the matrix denotes a possible stabilization matrix for the convection operator. Then, the Picard iterate (8) may be written in discrete form as

with and solutions of

(10)

The matrix is generally referred to as a (vector-)stiffness matrix, and the matrix is referred to as a (vector-)mass matrix; both matrices are symmetric positive definite (s.p.d.). The matrix is referred to as a (vector-)convection matrix, and is skew-symmetric (i.e. ) if the incompressibility constraints are solved exactly; finally, the matrix is the (negative) divergence matrix.
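To make the structure of the non-linear solve concrete, the following Python sketch implements a basic Picard (Oseen) iteration of the type described above, written in fixed-point form for brevity: at each step the convection and stabilization matrices are reassembled at the current velocity iterate, the resulting Oseen saddle-point system is solved, and the change in the iterate is used as a simple surrogate for the non-linear residual. All routine and variable names (assemble_convection, assemble_stabilization, and so on) are illustrative placeholders rather than the authors' implementation, and the direct sparse solve stands in for the preconditioned Krylov methods developed later in the paper.

```python
# Minimal sketch of a discrete Picard (Oseen) iteration for the stationary
# Navier-Stokes equations: reassemble the convection/stabilization matrices at
# the current velocity iterate, solve the Oseen system, and repeat until the
# iterate stops changing. Names are placeholders, not the authors' code.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def picard_oseen(K, M, B, assemble_convection, assemble_stabilization,
                 f, g, v0, nu, tol=1e-8, max_iter=25):
    """K: stiffness, M: mass, B: (negative) divergence matrix; v0: initial velocity guess."""
    n, m = K.shape[0], B.shape[0]
    v, p = v0.copy(), np.zeros(m)
    for _ in range(max_iter):
        N = assemble_convection(v)        # convection matrix with wind = current iterate
        W = assemble_stabilization(v)     # LPS (or other) stabilization matrix
        F = nu * K + N + W                # (1,1)-block of the Oseen system
        A = sp.bmat([[F, B.T], [B, None]], format="csc")
        rhs = np.concatenate([f, g])
        sol = spla.spsolve(A, rhs)        # placeholder for a preconditioned Krylov solve
        v_new, p_new = sol[:n], sol[n:]
        if np.linalg.norm(v_new - v) <= tol * max(np.linalg.norm(v_new), 1.0):
            return v_new, p_new
        v, p = v_new, p_new
    return v, p
```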

Regarding the stabilization procedure applied, we note that the matrix represents a differential operator that is not physical, and is introduced only to enhance coercivity (that is, increase the positivity of the real part of the eigenvalues) of the discretization, thereby allowing it to be stable. For the reasons discussed in [23, 34], in the following we employ the Local Projection Stabilization (LPS) approach described in [3, 4, 7]. We point out [24], where the authors derive the order of convergence of one- and two-level LPS applied to the Oseen problem. For other possible stabilizations applied to the Oseen problem, see [8, 11, 12, 20, 40].

In the LPS formulation, the stabilization matrix is defined as

(11)

Here, denotes a stabilization parameter, and is the fluctuation operator, with Id the identity operator and an -orthogonal (discontinuous) projection operator defined on patches of , where by a patch we mean a union of elements of our finite element discretization. In our implementation, the domain is divided into patches consisting of two elements in each dimension. Further, we define the stabilization parameter and the local projection as in [23, Sec. 2.1].
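To fix ideas, the following LaTeX sketch records one common gradient-based form of an LPS stabilization term built from the ingredients just described; the exact fluctuation used in [3, 4, 7, 23] may differ (for example, acting on the convective derivative rather than the full gradient), so this should be read as an illustration rather than the precise definition of (11).

```latex
% A gradient-based LPS stabilization term: delta_P is the local stabilization
% parameter, kappa := Id - pi_h is the fluctuation operator, and pi_h is the
% L^2-orthogonal (discontinuous) projection defined on patches P of the mesh.
\[
  s_h(\mathbf{u}_h,\mathbf{v}_h)
  \;=\; \sum_{P \in \mathcal{P}_h} \delta_P
        \bigl(\kappa(\nabla \mathbf{u}_h),\, \kappa(\nabla \mathbf{v}_h)\bigr)_{L^2(P)},
  \qquad
  \kappa := \mathrm{Id} - \pi_h .
\]
```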

3 Saddle-Point Systems

In this section we introduce saddle-point theory and the commutator argument derived in [21] for the forward stationary Navier–Stokes equations. These will be the main ingredients for devising our preconditioners.

Given an invertible system of the form

(12)

with invertible, a good candidate for a preconditioner is the block triangular matrix:

(13)

where denotes the (negative) Schur complement . Indeed, if is also invertible and we denote the set of eigenvalues of a matrix by , we have ; see [19, 25]. Since the preconditioner is not symmetric, we need to employ a Krylov subspace method for non-symmetric matrices, such as GMRES [39]. Clearly, we do not want to apply the inverse of as defined in (13), as the computational cost would be comparable to that of applying the inverse of . In particular, applying would be problematic since, even when and are sparse, is generally dense. For this reason, we consider a suitable approximation:

(14)

of , or, more precisely, a cheap application of the effect of on a generic vector. For instance, an efficient preconditioner for the matrix arising from (10) is given by (14), with being the approximation of using a multigrid routine, and the so-called pressure convection–diffusion preconditioner [10, p. 365–370] (first derived in [21]) for . The latter is derived by means of a commutator argument as follows. Consider the convection–diffusion operator defined on the velocity space as in (9), and suppose the analogous operator on the pressure space is well defined. Suppose also that the commutator

(15)

is small in some sense. Then, discretizing (15) with stable finite elements leads to

where is the discretization of in the finite element basis for the pressure, with , , , and the (scalar) mass, stiffness, convection, and stabilization matrices, respectively, in the pressure finite element space. As above, , and as well as are defined as in (11). Pre- and post-multiplying by and , the previous expression then gives

We still have no practical preconditioner due to the matrix ; however, it can be proved that for problems with enclosed flow [10, p. 176–177]. Finally, a good approximation of the Schur complement is

Note that in our derivation we have also included the stabilization matrices on the velocity and the pressure space, which was not done in [21].
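As an illustration of how the resulting Schur complement approximation is typically applied in practice, the following Python sketch uses the standard pressure convection–diffusion form from [10, 21], in which the action of the approximate Schur complement inverse consists of a pressure Poisson-type solve, a multiplication by the pressure convection–diffusion matrix, and a pressure mass-matrix solve; the matrix names and the use of sparse LU factorizations in place of multigrid and Chebyshev iterations are assumptions made only for the purposes of the sketch.

```python
# Minimal sketch of applying the pressure convection-diffusion (PCD)
# approximation of the Schur complement inverse, S^{-1} ~= M_p^{-1} F_p A_p^{-1},
# where A_p, F_p, M_p are the pressure-space stiffness (Laplacian),
# convection-diffusion, and mass matrices. Matrix names are illustrative.
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def make_pcd_schur_inverse(A_p, F_p, M_p):
    """Return a function applying the PCD approximation of S^{-1} to a vector."""
    A_solve = spla.splu(sp.csc_matrix(A_p))   # placeholder for a multigrid cycle on the pressure Laplacian
    M_solve = spla.splu(sp.csc_matrix(M_p))   # placeholder for a Chebyshev/Jacobi mass-matrix solve

    def apply(r_p):
        y = A_solve.solve(r_p)    # Poisson-type solve on the pressure space
        y = F_p @ y               # multiply by the pressure convection-diffusion matrix
        return M_solve.solve(y)   # mass-matrix solve
    return apply
```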

In the following we present a generalization of the pressure convection–diffusion preconditioner, applying the commutator argument in (15) to the case where the differential operators involved are to be considered vectorial differential operators, i.e.

(16)

where

Here is a differential operator on the velocity space with the corresponding differential operator on the pressure space, for , and , with the identity matrix for some . As above, we suppose that each , is well defined, and that the commutator is small in some sense. Again, after discretizing with stable finite elements we can rewrite

(17)

where , , , and with and the corresponding discretizations of and , respectively. Pre-multiplying (17) by , and post-multiplying by , gives that

Noting that and recalling that , we derive the following approximation:

(18)

where . In Section 5.2 we employ the approach outlined here to devise preconditioners for discrete optimality conditions of Navier–Stokes control problems.
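Putting the pieces of this section together, the following Python sketch shows how a block triangular approximation of the form (14) is applied within GMRES via SciPy: the preconditioner is realized as a linear operator whose action requires one approximate Schur complement solve and one approximate (1,1)-block solve per iteration. The sparse LU factorizations in the usage comment are placeholders for the multigrid and commutator-based approximations discussed above, and all names are illustrative.

```python
# Minimal sketch: applying a block upper-triangular preconditioner
#   P = [A  B^T; 0  -S_hat]
# within GMRES for the saddle-point system [A  B^T; B  0][u; p] = [f; g].
# apply_Ainv and apply_Sinv approximate the actions of A^{-1} and S_hat^{-1}
# (e.g. a multigrid cycle and a commutator-based Schur complement approximation).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def make_block_triangular_preconditioner(B, apply_Ainv, apply_Sinv):
    """Return a LinearOperator applying P^{-1} to a vector [r_u; r_p]."""
    m, n = B.shape
    def apply(r):
        r_u, r_p = r[:n], r[n:]
        p = -apply_Sinv(r_p)              # back-substitution: pressure block first
        u = apply_Ainv(r_u - B.T @ p)     # then the velocity block
        return np.concatenate([u, p])
    return spla.LinearOperator((n + m, n + m), matvec=apply)

# Usage sketch (K is the assembled saddle-point matrix, rhs the right-hand side;
# exact sparse solves stand in for the approximations described in the text):
# A_lu, S_lu = spla.splu(sp.csc_matrix(A)), spla.splu(sp.csc_matrix(S_hat))
# P_inv = make_block_triangular_preconditioner(B, A_lu.solve, S_lu.solve)
# x, info = spla.gmres(K, rhs, M=P_inv, restart=50, rtol=1e-8)
```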

4 First-Order Optimality Conditions and Time-Stepping

We now describe the strategy used for obtaining an approximate solution of (1)–(2) and of (3)–(4). We introduce adjoint variables (or Lagrange multipliers) and and make use of an optimize-then-discretize scheme, stating the first-order optimality conditions. We then discretize the conditions so obtained, for both the stationary and instationary Navier–Stokes control problems, and derive the corresponding Oseen linearized problems. For the instationary problem (3)–(4), we consider employing both backward Euler and Crank–Nicolson schemes in time. Both time-stepping schemes are A–stable, hence avoiding any constraints on the time step used. While only first-order accurate, backward Euler is also L–stable, and the technique presented below is easily generalized when the initial condition is not solenoidal. On the other hand, Crank–Nicolson is not L–stable, but is second-order accurate. However, if is not solenoidal, pre-processing is required in order to write the Oseen iteration.
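For reference, the two time-stepping schemes applied to an abstract evolution equation y'(t) = f(t, y) with step size τ take the following standard forms; this is an illustration of the schemes themselves, not the discrete optimality systems derived below.

```latex
% Backward Euler: first-order accurate, A-stable and L-stable.
% Crank--Nicolson: second-order accurate, A-stable but not L-stable.
\[
  \text{backward Euler:}\quad
  y_{n+1} = y_n + \tau\, f(t_{n+1},\, y_{n+1}),
  \qquad
  \text{Crank--Nicolson:}\quad
  y_{n+1} = y_n + \frac{\tau}{2}\bigl[\,f(t_n,\, y_n) + f(t_{n+1},\, y_{n+1})\,\bigr].
\]
```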

4.1 Stationary Navier–Stokes control

Introducing the adjoint velocity and the adjoint pressure , we may consider the Lagrangian associated with (1)–(2) as in [35], and write the Karush–Kuhn–Tucker (KKT) conditions as:

(19)

where we have substituted the gradient equation into the state equation.

Problem (19) is a coupled system of non-linear, stationary PDEs. In order to find a numerical solution of (19), we need to solve a sequence of linearizations of the system. As in [35], we solve at each step the Oseen approximation as follows. Letting be the current approximations to , , , and , respectively, with defined as in Section 2.1, the Oseen iterate is defined as

(20)

with , , , the solutions of the following Oseen problem:

(21)

for any and . The residuals are given by

The Oseen problem (21) is posed on the continuous level, so we need to discretize it in order to obtain a numerical solution of (1)–(2). Let be the vectors containing the numerical solutions at the -th iteration for , , , and , respectively, that is, , , , . Then, the (discrete) Oseen iterate is defined as

where

(22)

with , , and the discrete residuals given by

Here is the vector corresponding to the discretized desired state , and . In our tests, the initial guesses and for the non-linear process are the state and adjoint velocity solutions of the KKT conditions for the corresponding stationary Stokes control problem, with discretization given by (22) with , and residuals , , . Note that the right-hand side may also take into account boundary conditions (as done in our implementation).

In matrix form, we rewrite system (22) as

(23)

where

(24)

The matrix is of saddle-point type; however, since the incompressibility constraints are not solved exactly, is not symmetric in general.

4.2 Instationary Navier–Stokes control

We now state the KKT conditions for the instationary problem (3)–(4). As before, introducing the adjoint variables and , we consider the Lagrangian associated with (3)–(4) as in [41, p. 318]. Then, by deriving the KKT conditions and substituting the gradient equation into the state equation, the solution of (3)–(4) satisfies:

(25)

Problem (25) is a coupled system of non-linear, instationary PDEs. In order to find a numerical solution of (25), as for the stationary case we take an Oseen linearization. Let be the current approximation to , with , and the corresponding space for the adjoint velocity (see [41, p. 315–321] and [18, p. 88–95], for instance, for the case ). Then, the Oseen iterate is of the form (20), with the solution of:

(26)

for any and . The residuals are given by

(27)

Note that with this notation in , and on .

Equations (26) are the Oseen equations for instationary Navier–Stokes control, involving a coupled system of instationary convection–diffusion equations and divergence-free conditions. Due to this structure, we present two discretized versions of (26), one making use of backward Euler time-stepping, the other employing the Crank–Nicolson scheme. Further, in order to solve (26) we need to choose initial guesses and for the state and adjoint velocities, and then iteratively solve a sequence of linearized problems. In our tests, and are again the (velocity) solutions of the KKT conditions for the corresponding Stokes control problem, for which the optimality conditions are:

along with the incompressibility constraints together with initial, final, and boundary conditions on the state and adjoint velocities as in (25).

We now derive the linear systems resulting from the time-stepping schemes. For the sake of exposition, we introduce the matrices , ,