Preconditioning immersed isogeometric finite element methods with application to flow problems

08/11/2017, by Frits de Prenter et al., TU Eindhoven

Immersed finite element methods generally suffer from conditioning problems when cut elements intersect the physical domain only on a small fraction of their volume. De Prenter et al. [Computer Methods in Applied Mechanics and Engineering, 316 (2017) pp. 297-327] present an analysis for symmetric positive definite (SPD) immersed problems, and for this class of problems an algebraic preconditioner is developed. In this contribution the conditioning analysis is extended to immersed finite element methods for systems that are not SPD and the preconditioning technique is generalized to a connectivity-based preconditioner inspired by Additive-Schwarz preconditioning. This Connectivity-based Additive-Schwarz (CbAS) preconditioner is applicable to problems that are not SPD and to mixed problems, such as the Stokes and Navier-Stokes equations. A detailed numerical investigation of the effectiveness of the CbAS preconditioner for a range of flow problems is presented.


1 Introduction

Immersed finite element methods have been demonstrated to have great potential for problems that are posed on domains for which traditional (mesh-fitting) meshing techniques encounter problems, such as prohibitively large meshing costs or the necessity for manual intervention. In recent years, immersed methods – such as the finite cell method Parvizian2007 , CutFEM Burman2015 and immersogeometric analysis Kamensky2015 – have been a valuable companion to isogeometric analysis Hughes2005 as they enable computations on volumetric domains based on the availability of merely a CAD surface representation Schillinger2012 ; Rank2012 or voxelized geometries Schillinger2015 . Additionally, in the context of isogeometric analysis, immersed techniques can be considered as a natural way to incorporate CAD trimming curves in the design-through-analysis cycle Ruess2013 ; Ruess2014 ; Guo2015 ; Guo2017 .

A disadvantage of immersed finite element methods is that they can result in severely ill-conditioned matrices when the system contains elements that only intersect the physical domain on a small fraction of their volume, see, e.g., Burman2010 ; Sanches2011 ; BurmanHansbo2012 ; Massing2014 ; Rueberg2014 ; Schillinger2015 ; Dettmer2016 ; Lehrenfeld2016 ; SIPIC . We note that ill-conditioning effects due to small volume fractions are not exclusive to finite element methods, and also occur for immersed finite volume methods, see e.g., Johansen1998 . In SIPIC we have analyzed these conditioning problems for partial differential equations that lead to symmetric and coercive variational forms. In particular we considered isogeometric discretizations of the Poisson problem and linear elasticity, both with symmetrically imposed Dirichlet conditions by means of Nitsche's method Nitsche1971 . Based on this analysis the SIPIC (Symmetric Incomplete Permuted Inverse Cholesky) preconditioner was developed as an algebraic preconditioning technique tailored to this class of partial differential equations.

The SIPIC preconditioner exhibits excellent behavior for symmetric and coercive variational forms – and therefore for Symmetric Positive Definite (SPD) matrices – which is the class of problems to which it is restricted. Besides this restriction to SPD problems, a drawback of SIPIC is that its robustness can deteriorate for high-order bases with low regularity, such as p-FEM bases (e.g., Duester2017 ).

In the present work we show that SIPIC can be interpreted as a sparse Additive-Schwarz preconditioner (e.g., Smith1996 ; Toselli2005 ) and present a generalization exploiting the connectivity data of the basis. This Connectivity-based Additive-Schwarz (CbAS) preconditioner does not suffer from the restrictions of SIPIC and is robust for high-order bases with low regularity. Additionally, we present a method based on the Schur complement to efficiently apply the CbAS preconditioner to mixed variational forms. These developments broaden the range of applications and, most notably, open the doors to applications in flow problems. Flow problems on immersed grids have been studied for decades, see e.g., the pioneering work in Peskin1972 ; Peskin1977 and the more recent review article Peskin2002 . Recent work on immersed flow problems involves numerous applications, such as: flows around complex (CAD) objects Bazilevs2012 ; Xu2015 ; Hsu2016 ; fluid-structure interaction with large deformations Burman2014 ; Massing2014 ; Wang2017 ; Wu2017 ; Schott2014 ; Schott2016 ; Kadapa2017 ; multiphase flows Schott2015 ; topology optimization Villanueva2017 and flow problems on scanned domains such as, e.g., the imbibition of porous media or biomechanical applications Hsu2014 ; Kamensky2015 ; Hsu2015 . However, robust preconditioning of such problems is still an open research topic. We note that alternative broadly applicable approaches to resolving the ill-conditioning of immersed problems exist, of which the ghost penalty Burman2010 ; BurmanHansbo2012 – which is customary for methods referred to as CutFEM Burman2015 – and basis function manipulation Hoellig2001 ; Hoellig2005 ; Rueberg2012 ; Rueberg2014 ; Rueberg2016 are particularly noteworthy. Alternative preconditioners have also been developed, such as Menk2011 ; Lehrenfeld2016 ; Badia2017 . Similar to SIPIC, these preconditioners are, however, restricted to linear elasticity and the Poisson problem.

The CbAS preconditioner developed herein is tested for a range of problems of increasing complexity: a Poisson problem with boundary conditions imposed by the symmetric and nonsymmetric Nitsche method; an SUPG-stabilized convection-dominated convection-diffusion problem with the (standard) symmetric Nitsche method; a Stokes flow with the symmetric Nitsche method; and the steady and transient incompressible Navier-Stokes equations, also with the symmetric Nitsche method. These four problems enable investigation of the conditioning of symmetric and nonsymmetric single-field and two-field immersed problems. The symmetric and nonsymmetric Nitsche methods for the Poisson problem furthermore enable a comparison between the conditioning properties of both methods.

In Section 2 the abstract problem formulation is presented along with an analysis of the conditioning of unfitted finite element methods. This section also briefly reviews the concepts underlying the SIPIC preconditioner developed in SIPIC , which are essential for extending the preconditioner to non-SPD and mixed systems. The CbAS preconditioning technique we propose here is described in Section 3. In Section 4 we present the numerical results obtained with the preconditioner, and concluding remarks are given in Section 5.

2 Problem formulations and conditioning analysis

In Section 2.1 we introduce the definitions and formulations that will be used throughout this manuscript, after which we present an analysis of the conditioning of unfitted finite element methods in Section 2.2. The essential aspects of the algebraic SIPIC preconditioning scheme developed in SIPIC , including its main restrictions, are reviewed in Section 2.3.

2.1 Formulations and definitions

We consider single-field and mixed partial differential equations posed on a $d$-dimensional domain $\Omega$, which we refer to as the physical domain. The boundary $\partial\Omega$ is partitioned into the complementary parts $\partial\Omega_D$ and $\partial\Omega_N$ ($\partial\Omega_D \cup \partial\Omega_N = \partial\Omega$ and $\partial\Omega_D \cap \partial\Omega_N = \emptyset$) for the imposition of Dirichlet and Neumann boundary conditions, respectively. The physical domain is encapsulated by a geometrically simple domain, on which a tensor product grid can be defined (Figure 1). Herein we consider uniform grids, but the condition number analysis and the proposed preconditioning technique are not restricted to this. On the encapsulating domain we define multivariate isogeometric B-spline bases, which are tensor products of univariate B-spline bases of degree $p$ and regularity $C^{\alpha}$ Hughes2005 ; Cottrell2009 . Note that, in principle, both $p$ and $\alpha$ can be different per spatial direction and can even vary locally. For simplicity, however, in this work we restrict ourselves to the same global $p$ and $\alpha$ for every spatial direction. The restriction of this basis to the basis functions supported on $\Omega$ is denoted by $\boldsymbol{\Phi}$. The span of the basis $\boldsymbol{\Phi}$ forms the finite dimensional function space $V^h$ in which we approximate the solution to the single-field problems. The solution to two-field mixed problems is approximated in $V^h \times Q^h$, which comprises separate bases for the function spaces $V^h$ and $Q^h$. These spaces are naturally equipped with inner products corresponding to the problem under consideration.

Figure 1: A geometrically complex physical domain that is encapsulated by a fictitious domain extension, resulting in a rectilinear embedding domain that is simple to discretize by a tensor product grid.

The variational form of the single-field problems we consider herein is written as:

Find $u^h \in V^h$ such that
$$a(u^h, v^h) = b(v^h) \qquad \forall\, v^h \in V^h. \qquad (1)$$

For two-field mixed problems the variational form reads:

Find $(u^h, p^h) \in V^h \times Q^h$ such that
$$a\big((u^h, p^h), (v^h, q^h)\big) = b\big((v^h, q^h)\big) \qquad \forall\, (v^h, q^h) \in V^h \times Q^h. \qquad (2)$$

In these formulations $a$ and $b$ are bounded bilinear and linear forms on the relevant discrete space ($V^h$ for (1) and $V^h \times Q^h$ for (2)), respectively. The specific forms used in the examples are introduced in Section 4. We consider variational problems that are out of the scope of SIPIC , in the sense that the discretization of (1) does not result in a symmetric positive definite (SPD) matrix and that the discretization of (2) results in a, possibly nonsymmetric, mixed and therefore indefinite matrix.

In order to compute the solution to (1) we define the vector $\boldsymbol{\Phi}$ consisting of all basis functions in the restricted basis and spanning $V^h$. For every function $u^h \in V^h$ there exists a unique coefficient vector $\mathbf{x}$ such that $u^h = \sum_i x_i \Phi_i$. We can therefore condense (1) into the linear algebraic system:

$$\mathbf{A} \mathbf{x} = \mathbf{b}, \qquad (3)$$

in which $A_{ij} = a(\Phi_j, \Phi_i)$ and $b_i = b(\Phi_i)$. Similarly we define $\boldsymbol{\Phi}^u$ spanning $V^h$ and $\boldsymbol{\Phi}^p$ spanning $Q^h$ to represent the solution to (2). This yields the linear system:

$$\begin{bmatrix} \mathbf{A}_{uu} & \mathbf{A}_{up} \\ \mathbf{A}_{pu} & \mathbf{A}_{pp} \end{bmatrix} \begin{bmatrix} \mathbf{x}_u \\ \mathbf{x}_p \end{bmatrix} = \begin{bmatrix} \mathbf{b}_u \\ \mathbf{b}_p \end{bmatrix}, \qquad (4)$$

in which the blocks $\mathbf{A}_{uu}$, $\mathbf{A}_{up}$, $\mathbf{A}_{pu}$ and $\mathbf{A}_{pp}$ and the right-hand-side vectors $\mathbf{b}_u$ and $\mathbf{b}_p$ follow from the bilinear and linear forms in (2), restricted to the bases $\boldsymbol{\Phi}^u$ and $\boldsymbol{\Phi}^p$.

2.2 Conditioning of immersed finite element methods

We briefly review the definitions and properties related to conditioning and iterative solvers that are essential for the ensuing developments. The reader is referred to, e.g., Saad and references therein for more detailed information.

When solving a linear system of the form (3) or (4), the condition number is an important property of the matrix $\mathbf{A}$. The Euclidean (2-norm) condition number is defined as:

$$\kappa_2(\mathbf{A}) = \|\mathbf{A}\|_2 \, \|\mathbf{A}^{-1}\|_2, \qquad (5)$$

with induced Euclidean norm:

$$\|\mathbf{A}\|_2 = \max_{\mathbf{x} \neq \mathbf{0}} \frac{\|\mathbf{A}\mathbf{x}\|_2}{\|\mathbf{x}\|_2}. \qquad (6)$$

Small condition numbers indicate a well-conditioned system. Iterative solvers generally display more efficient convergence behavior for well-conditioned systems, and the (unknown) approximation error is also better estimated by the (known) residual for well-conditioned systems. SPD systems are frequently solved by the Conjugate Gradient (CG) method, the convergence of which directly depends on the condition number. For systems that are not SPD, the iterative method of choice is the Generalized Minimal RESidual (GMRES) method Saad . The convergence of GMRES does not just depend on the condition number; a more direct dependence exists on the positioning of the eigenvalues in the complex plane. As a measure of the performance of GMRES we consider the eigenvalue ratio:

$$C(\mathbf{A}) = \frac{|\lambda_{\max}(\mathbf{A})|}{|\lambda_{\min}(\mathbf{A})|}, \qquad (7)$$

with $\lambda_{\max}$ and $\lambda_{\min}$ the eigenvalues with the largest and smallest magnitudes. It should be noted that $C(\mathbf{A}) = \kappa_2(\mathbf{A})$ for symmetric matrices and that $C(\mathbf{A}) \leq \kappa_2(\mathbf{A})$ for nonsymmetric matrices.
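As a minimal numerical illustration of these two quantities, the following Python sketch computes the 2-norm condition number and the eigenvalue ratio for a small, arbitrary test matrix (the matrix has no relation to the discretizations considered in this work):

```python
import numpy as np

# Arbitrary small nonsymmetric test matrix (illustration only).
A = np.array([[4.0, 1.0, 0.0],
              [0.5, 3.0, 1.0],
              [0.0, 0.2, 1e-4]])

# Euclidean (2-norm) condition number, cf. (5)-(6).
kappa2 = np.linalg.cond(A, p=2)

# Eigenvalue ratio, cf. (7): largest over smallest eigenvalue magnitude.
eigvals = np.linalg.eigvals(A)
C = np.abs(eigvals).max() / np.abs(eigvals).min()

print(f"condition number kappa_2 = {kappa2:.3e}")
print(f"eigenvalue ratio  C      = {C:.3e}")
```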

Immersed finite element methods are known to yield severely ill-conditioned systems when the system contains elements that only intersect the physical domain on a small fraction of their volume, i.e., cut elements with small volume fractions. The volume fraction $\eta_K$ of an element $K$ (Figure 1) is defined as the fraction of the element that intersects the physical domain $\Omega$:

$$\eta_K = \frac{|K \cap \Omega|}{|K|}. \qquad (8)$$

The smallest volume fraction in the system is denoted by $\eta = \min_K \eta_K$.

From (5) and (6) it follows that the condition number depends on the following maxima:

$$\|\mathbf{A}\|_2 = \max_{\mathbf{x} \neq \mathbf{0}} \frac{\|\mathbf{A}\mathbf{x}\|_2}{\|\mathbf{x}\|_2}, \qquad (9)$$
$$\|\mathbf{A}^{-1}\|_2 = \max_{\mathbf{x} \neq \mathbf{0}} \frac{\|\mathbf{A}^{-1}\mathbf{x}\|_2}{\|\mathbf{x}\|_2} = \left( \min_{\mathbf{x} \neq \mathbf{0}} \frac{\|\mathbf{A}\mathbf{x}\|_2}{\|\mathbf{x}\|_2} \right)^{-1}, \qquad (10)$$

with $\mathbf{x}$ the coefficient vector of a function $u^h \in V^h$. The norm and (in magnitude) largest eigenvalue of $\mathbf{A}$ are generally unaffected by the trimmed elements, viz. these are of approximately the same magnitude for fitted (mesh-conforming) and unfitted (immersed) methods. For most variational problems the maximum in (9) is attained by an oscillatory function with the highest frequency that can be captured by the grid, as proven for the Poisson problem in Johnson . The norm and in magnitude largest eigenvalue of $\mathbf{A}^{-1}$, on the other hand, behave very differently for mesh-fitting and immersed methods. Note that the largest eigenvalue of $\mathbf{A}^{-1}$ is equal to the reciprocal of the smallest eigenvalue of $\mathbf{A}$. For most variational problems, mesh-fitting methods with regular grids attain the maximum in (10) with a function that closely resembles the eigenfunction of the operator with the lowest eigenfrequency, see e.g., Johnson for the Poisson problem. For immersed methods, on the other hand, the maximum in (10) is generally attained by a function that is only supported on the element with the smallest volume fraction. This directly follows from (10): since the system matrix represents a bounded linear operator, $\|\mathbf{A}\mathbf{x}\|$ will be small for a function $u^h$ that is small. When the system contains a cut element with a very small volume fraction, a function that is only supported on that element will be small in any norm. The volume fraction of this element does not affect the norm of the corresponding coefficient vector $\mathbf{x}$, however. Therefore the system contains functions for which $\|u^h\| \ll \|\mathbf{x}\|$ in any suitable norm. This yields very large values for $\|\mathbf{A}^{-1}\|_2$, which causes the ill-conditioning of immersed methods.
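To make this mechanism concrete, the following self-contained sketch considers a deliberately simplified setting that is not one of the discretizations used in this work: a one-dimensional Poisson problem discretized with linear finite elements on a uniform grid, in which the last element is intersected by the domain boundary at volume fraction eta. The coefficient vector that activates only the basis function on the cut element has unit norm, yet its image under the stiffness matrix vanishes with eta, and the condition number grows without bound as eta tends to zero.

```python
import numpy as np

def immersed_1d_stiffness(n_elem=32, eta=1e-6):
    """Stiffness matrix of a 1D Poisson problem on a uniform grid of n_elem
    elements of size h = 1/n_elem, where only the fraction eta of the last
    element lies inside the physical domain. A homogeneous Dirichlet condition
    is imposed strongly at x = 0; the immersed end carries a natural condition."""
    h = 1.0 / n_elem
    n_dof = n_elem                      # nodes 1..n_elem (node 0 eliminated)
    A = np.zeros((n_dof, n_dof))
    for e in range(n_elem):
        frac = eta if e == n_elem - 1 else 1.0   # integrate over the physical part only
        ke = frac / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
        dofs = [e - 1, e]                        # global dofs (-1 = eliminated node)
        for i, I in enumerate(dofs):
            for j, J in enumerate(dofs):
                if I >= 0 and J >= 0:
                    A[I, J] += ke[i, j]
    return A

for eta in [1e-2, 1e-4, 1e-6]:
    A = immersed_1d_stiffness(eta=eta)
    x = np.zeros(A.shape[0]); x[-1] = 1.0        # only the function on the cut element
    print(f"eta = {eta:.0e}: ||A x||/||x|| = {np.linalg.norm(A @ x):.2e}, "
          f"cond(A) = {np.linalg.cond(A):.2e}")
```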

With high-order bases, the effect is exacerbated. As shown in (SIPIC, Section 3.2), polynomial bases generally contain a function of the form:

$$u^h(\mathbf{x}) = \prod_{i=1}^{d} \left( \frac{x_i - x_{0,i}}{h} \right)^{p} \qquad (11)$$

that is only supported on the element with the smallest volume fraction and that corresponds to a coefficient vector with a magnitude of order one. In (11), $h$ is the mesh size, $p$ the order of the discretization, and $\mathbf{x}_0$ a point on the boundary of the element. For small volume fractions this function becomes arbitrarily small in any norm, while the corresponding coefficient vector remains of order one. Note that we assume $h$ and $p$ to be isotropic. However, a similar relation holds for anisotropic $h$ and $p$ per spatial direction. It follows from (11) that high-order bases contain even smaller functions than lower-order bases (both with a corresponding coefficient vector of norm of order one). For symmetric and elliptic second order problems, we have used this analysis to derive a lower bound for the condition number by the scaling relation (SIPIC, Section 3.2):

$$\kappa_2(\mathbf{A}) \geq C\, \eta^{-(2p+1-2/d)}, \qquad (12)$$

for some constant $C$ independent of the smallest volume fraction $\eta$.

In the derivation of (12) a uniform grid was assumed, but a similar derivation can be established for nonuniform grids. Similar relations can also be derived for different single-field and mixed partial differential equations. The rate generally depends on the equation under consideration, and for mixed equations even the specific pair of function spaces (e.g., Taylor-Hood, Raviart-Thomas, etc.) has an effect. The scaling with the smallest volume fraction appears to be generic, however. The derivation of the exact scaling relations for all variational problems considered in Section 4 is beyond the scope of this work, but the existence of such scaling relations is supported by the numerical results.

2.3 Algebraic preconditioning and limitations

If a system is preconditioned, one instead solves one of the equivalent systems:

$$\mathbf{P}_L \mathbf{A} \mathbf{x} = \mathbf{P}_L \mathbf{b}, \qquad \mathbf{A} \mathbf{P}_R \mathbf{y} = \mathbf{b} \;\; (\mathbf{x} = \mathbf{P}_R \mathbf{y}), \qquad \mathbf{P}_L \mathbf{A} \mathbf{P}_R \mathbf{y} = \mathbf{P}_L \mathbf{b}. \qquad (13)$$

The matrices $\mathbf{P}_L$ and $\mathbf{P}_R$ are chosen such that the preconditioned matrices $\mathbf{P}_L \mathbf{A}$, $\mathbf{A} \mathbf{P}_R$ or $\mathbf{P}_L \mathbf{A} \mathbf{P}_R$ have a smaller condition number or eigenvalue ratio than the matrix $\mathbf{A}$ itself. In the examples in Section 4 we restrict ourselves to left preconditioning, as left and right preconditioning are equivalent with respect to eigenvalues and as $\mathbf{P}_L \mathbf{A} \mathbf{P}_R$ also has the same eigenvalues as $\mathbf{P}_R \mathbf{P}_L \mathbf{A}$. Generally $\mathbf{P}_L$ or $\mathbf{P}_R$ is a sparse approximation of $\mathbf{A}^{-1}$, as this implies $\mathbf{P}_L \mathbf{A} \approx \mathbf{I}$ and $\mathbf{A} \mathbf{P}_R \approx \mathbf{I}$.
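As a minimal sketch of how a preconditioner is supplied to an iterative solver in practice, the snippet below passes a simple Jacobi (diagonal) approximation of the inverse – a stand-in, not the CbAS preconditioner – to SciPy's GMRES for a randomly generated, diagonally dominant placeholder system:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Placeholder sparse test system; in practice A and b come from the discretization.
n = 200
rng = np.random.default_rng(0)
A = sp.diags(2.0 + rng.random(n)) + 0.5 * sp.random(n, n, density=0.01, random_state=0)
A = A.tocsr()
b = rng.random(n)

# Simple preconditioner: Jacobi (diagonal) scaling as an approximation of A^{-1}.
P = sp.diags(1.0 / A.diagonal())
M = spla.LinearOperator((n, n), matvec=lambda v: P @ v)

x, info = spla.gmres(A, b, M=M)
print("GMRES converged" if info == 0 else f"GMRES info = {info}")
```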

The underlying mechanism of the ill-conditioning of immersed methods is the presence of functions that are only supported on cut elements with small volume fractions, such that the function is much smaller than its coefficient vector. The conceptual idea of the algebraic SIPIC preconditioner is to preclude the aforementioned effect by preconditioning the basis $\boldsymbol{\Phi}$. This is done by instigating a change of basis with a nonsingular preconditioning matrix $\mathbf{S}$:

$$\hat{\boldsymbol{\Phi}} = \mathbf{S} \boldsymbol{\Phi}. \qquad (14)$$

The basis $\hat{\boldsymbol{\Phi}}$ spans the same space as $\boldsymbol{\Phi}$ due to the nonsingularity of $\mathbf{S}$, but by adequate construction of this preconditioner we prevent the occurrence of functions for which the magnitude of the corresponding coefficient vector is much larger than the function itself. For the system matrix this implies:

$$\hat{\mathbf{A}} = \mathbf{S} \mathbf{A} \mathbf{S}^T, \qquad (15)$$

which has the same eigenvalues as the left preconditioned system:

$$\mathbf{P} \mathbf{A} \mathbf{x} = \mathbf{P} \mathbf{b}, \qquad (16)$$

with:

$$\mathbf{P} = \mathbf{S}^T \mathbf{S}. \qquad (17)$$

A simple choice for $\mathbf{S}$ (and consequently $\mathbf{P}$) is a diagonal matrix that linearly scales the basis functions. For positive definite systems this scaling can be derived from the square root of the diagonal of $\mathbf{A}$, such that all basis functions in $\hat{\boldsymbol{\Phi}}$ are equal in the operator norm (i.e., the scaled system matrix has a unit diagonal). This preconditioner is also known as diagonal scaling or Jacobi preconditioning.
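A minimal sketch of this symmetric diagonal scaling, applied to an arbitrary SPD placeholder matrix, illustrates the relations (14)-(17):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.random((6, 6))
A = B @ B.T + 6 * np.eye(6)        # placeholder SPD test matrix

# Symmetric diagonal (Jacobi) scaling: S = diag(A)^{-1/2}.
S = np.diag(1.0 / np.sqrt(np.diag(A)))
A_hat = S @ A @ S.T                # scaled system matrix with unit diagonal, cf. (15)
P = S.T @ S                        # equivalent left preconditioner, cf. (17)

print(np.allclose(np.diag(A_hat), 1.0))          # unit diagonal of the scaled matrix
print(np.linalg.cond(A), np.linalg.cond(A_hat))  # conditioning before and after scaling
print(np.allclose(np.sort(np.linalg.eigvals(P @ A).real),
                  np.sort(np.linalg.eigvals(A_hat).real)))  # (15) and (16) share eigenvalues
```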

In (SIPIC, Section 4.1.2) it has been shown that linear scaling is generally not sufficient to obtain proper conditioning. On elements with small volume fractions, basis functions do not only become very small, but higher order terms also become less significant. As a result, scaled basis functions can be very similar to each other and therefore become almost linearly dependent. To bypass this effect, the SIPIC preconditioning scheme algebraically identifies and orthonormalizes the functions that are almost linearly dependent. This results in adding a few off-diagonal terms, leading to a very sparse (almost diagonal) preconditioner. After a permutation this leads to a block lower triangular matrix of the form displayed in Figure 2(a).

The scaling and orthonormalization operations in the construction of the SIPIC preconditioner do require the variational form to be an inner product, which restricts this preconditioning approach to SPD problems. This precludes application of SIPIC to most flow problems. Furthermore, even though the method to identify the almost linearly dependent functions has been shown to be very effective for smooth B-spline bases, it is not always adequate for high-order bases with low regularity. The reason for this is that in such bases sets of more than two basis functions can be almost linearly dependent, while the SIPIC construction, which considers all possible pairs of functions in such a set, does not identify this dependence. This results in potentially suboptimal performance of the SIPIC preconditioner for such systems. The CbAS preconditioner proposed in Section 3 does not suffer from these restrictions, and can hence be conceived of as a generalization and improvement of SIPIC.

3 Preconditioning immersed finite element methods

In this section we introduce the Connectivity-based Additive-Schwarz (CbAS) preconditioner. CbAS enables preconditioning of immersed finite element approximations of flow problems by generalizing two aspects of SIPIC. In Section 3.1 we first propose a systematic way of constructing and preconditioning connectivity-based matrix blocks. This makes CbAS applicable to high-order bases with low regularity and to problems with non-SPD matrices. In Section 3.2 we comment on the computational cost of the CbAS preconditioner. In Section 3.3 we propose a procedure to optimally apply CbAS to mixed problems.

3.1 The CbAS preconditioner for single-field problems

Figure 2: Illustrative sparsity patterns of (a) the lower triangular preconditioner of the basis, and (b) the block-diagonal preconditioner. Note that both matrices are permuted and separable as they contain non-overlapping blocks.

To motivate the CbAS preconditioner, we first reconsider some of the properties underlying the construction of the SIPIC preconditioner. Similar to the matrix in Figure 2(a), the left SIPIC preconditioner can be permuted to a separable block-diagonal matrix, see Figure 2(b). Furthermore, this preconditioner is equal to the inverse of the matrix that would result from restricting the SPD matrix $\mathbf{A}$ to the same separable blocks of nonzero entries, viz. the blocks of $\mathbf{P}$ are the inverses of the blocks of $\mathbf{A}$. This is because $\mathbf{S}$ corresponds to the inverse Cholesky decomposition of this restriction of $\mathbf{A}$, viz. the lower triangular blocks of $\mathbf{S}$ are the inverse Cholesky decompositions of the blocks of $\mathbf{A}$. Computing these block-wise Cholesky decompositions by the Gram-Schmidt orthonormalization procedure is one of the aspects that restrict the SIPIC preconditioning technique to SPD matrices. If we omit computing $\mathbf{S}$ and directly compute the blocks of $\mathbf{P}$ by inverting the blocks of $\mathbf{A}$, this restriction to SPD matrices is relaxed to $\mathbf{A}$ being (block-wise) nonsingular. This condition is generally satisfied for well-posed variational forms. Therefore – except for identifying the functions that are almost linearly dependent – this generalizes the SIPIC preconditioning scheme to non-SPD matrices.

The procedure in (SIPIC, Section 4.1.2) to identify the near linear dependencies in SIPIC is also restricted to SPD matrices. Additionally, the procedure has proven to be less robust for high-order bases with low regularity than for the smooth B-spline bases considered in SIPIC . A more robust procedure results, however, if the identification of these dependencies is approached in a similar manner as in domain decomposition preconditioning (see e.g., Smith1996 ; Toselli2005 ). It is well known that in the framework of Additive-Schwarz preconditioners, it is not necessary that these blocks are separable (i.e., non-overlapping), as is the case for SIPIC (see Figures 2(b) and 3). There exists a rich literature on Additive-Schwarz preconditioners for finite element methods (see e.g., Ferencz1998 ) and recent work for isogeometric analysis BeiraoDaVeiga2012 ; BeiraoDaVeiga2013Elasticity ; BeiraoDaVeiga2014 .

Figure 3: Illustrative sparsity pattern of a block-diagonal Additive-Schwarz preconditioner constructed by (18). Values of the red and green blocks, and of the green and blue blocks, are summed at the yellow and cyan indices, respectively. Note that because of the overlap these blocks are no longer separable in the assembled preconditioning matrix.

To provide a basis for the CbAS preconditioner developed herein we first recall some elementary aspects of Additive-Schwarz preconditioning. We denote the size of matrix $\mathbf{A}$ by $n$, the number of blocks by $N_b$, the index of a block by $k$ and the rank of block $k$ by $n_k$. For every block a projection matrix $\mathbf{E}_k \in \mathbb{R}^{n \times n_k}$ is constructed, consisting of the unit vectors corresponding to the indices of the functions in block $k$. That is, if block $k$ contains the functions with indices $i_1, \ldots, i_{n_k}$, then $\mathbf{E}_k$ maps a vector of length $n_k$ to a vector of length $n$ with entries at those indices as the only nonzero entries. The transpose of the projection matrix is a reduction matrix, i.e., $\mathbf{E}_k^T \mathbf{z} \in \mathbb{R}^{n_k}$ for any vector $\mathbf{z}$ of length $n$. The restriction of matrix $\mathbf{A}$ to block $k$ is denoted by $\mathbf{A}_k = \mathbf{E}_k^T \mathbf{A} \mathbf{E}_k$. Assuming that the blocks are invertible – e.g., when $\mathbf{A}$ derives from a coercive bilinear form – and provided the blocks are sufficiently small, it is feasible to invert each $\mathbf{A}_k$ (directly or approximately). The inverse of $\mathbf{A}_k$ is projected into an $n \times n$ matrix using the projection and reduction matrices, i.e., $\mathbf{E}_k \mathbf{A}_k^{-1} \mathbf{E}_k^T$. This results in a sparse matrix with nonzero entries only at the cross indices of block $k$. Finally these sparse matrices are summed to form the preconditioner:

$$\mathbf{P} = \sum_{k=1}^{N_b} \mathbf{E}_k \mathbf{A}_k^{-1} \mathbf{E}_k^T. \qquad (18)$$

Because $\mathbf{P}$ is a summation of block-wise inverses of $\mathbf{A}$, it is intuitive to interpret $\mathbf{P}$ as a sparse approximation of $\mathbf{A}^{-1}$. Every index must be contained in at least one of the blocks, as otherwise $\mathbf{P}$ contains empty rows and columns and therefore exhibits a null space. If this requirement is satisfied and all the blocks are non-overlapping, the preconditioner is again equal to the inverse of the restriction of $\mathbf{A}$ to the same blocks. For SPD matrices the procedure in (18) then yields the same preconditioner as the local orthonormalization procedure in SIPIC. The Additive-Schwarz procedure can therefore be conceived of as a generalization of the local orthonormalization procedure in SIPIC in the sense that it can also treat non-SPD matrices and overlapping (non-separable) blocks.
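The construction (18) is straightforward to express in code. The sketch below is a minimal SciPy implementation in which the blocks are given as lists of global function indices; the names are illustrative and the local inverses are computed directly:

```python
import numpy as np
import scipy.sparse as sp

def additive_schwarz(A, blocks):
    """Assemble the Additive-Schwarz preconditioner of (18) for a sparse matrix A
    and a list of index blocks. Every index must appear in at least one block."""
    A = sp.csr_matrix(A)
    n = A.shape[0]
    rows, cols, vals = [], [], []
    for idx in blocks:
        idx = np.asarray(idx)
        A_k = A[idx][:, idx].toarray()            # restriction of A to the block
        A_k_inv = np.linalg.inv(A_k)              # local (direct) inverse
        I, J = np.meshgrid(idx, idx, indexing="ij")
        rows.append(I.ravel()); cols.append(J.ravel()); vals.append(A_k_inv.ravel())
    # Duplicate entries of overlapping blocks are summed on conversion to CSR,
    # which corresponds to the summation over blocks in (18).
    return sp.coo_matrix((np.concatenate(vals),
                          (np.concatenate(rows), np.concatenate(cols))),
                         shape=(n, n)).tocsr()
```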

If $\mathbf{A}$ is symmetric positive definite the Additive-Schwarz lemma (commonly referred to as Lions' lemma) holds Matsokin1985 ; Lions1988 (see Smith1996 ; Toselli2005 for this specific form):

$$\mathbf{x}^T \mathbf{P}^{-1} \mathbf{x} = \min_{\left\{ \mathbf{x}_k \,:\, \sum_k \mathbf{E}_k \mathbf{x}_k = \mathbf{x} \right\}} \; \sum_k \mathbf{x}_k^T \mathbf{A}_k \mathbf{x}_k. \qquad (19)$$

This lemma indicates that Rayleigh quotients with $\mathbf{P}^{-1}$ are closely related to Rayleigh quotients with $\mathbf{A}$, emphasizing the intuitive interpretation of $\mathbf{P}$ as a sparse approximation of $\mathbf{A}^{-1}$. As discussed in Section 2.2, the underlying mechanism of ill-conditioning of immersed methods pertains to trimmed and therefore small or almost linearly dependent basis functions. Such basis functions can together form a function whose corresponding coefficient vector $\mathbf{x}$ is much larger than the function itself, such that in the SPD case $\mathbf{x}^T \mathbf{A} \mathbf{x} \ll \mathbf{x}^T \mathbf{x}$. From (19) one can see that if such almost linearly dependent functions are in the same block, they will also form a small Rayleigh quotient with $\mathbf{P}^{-1}$. The Additive-Schwarz preconditioner then effectively targets the fundamental conditioning problem of immersed methods. The Additive-Schwarz lemma in (19) is in principle restricted to SPD systems, but the procedure has been analyzed extensively for nonsymmetric and indefinite systems Cai1991 ; Cai1992 ; Widlund1992 ; Cai1993 ; Cai1994 ; Chan1994 ; Smith1996 ; Cai1998 ; Toselli2005 ; Sarkis2007 and has for example been applied successfully to mesh-conforming Navier-Stokes systems (see e.g., Bazilevs2010Weak ).

A fundamental aspect in the preconditioning of immersed finite element methods based on (18) is the way in which the blocks are constructed. The selection should be done in such a way that almost linearly dependent functions are in the same block. Evidently, there are various ways to satisfy this criterion for the selection procedure. Because almost linearly dependent functions necessarily have overlapping supports, the most straightforward way to do this is to use the connectivity of the basis functions and to devise one block for every element, containing all the functions that are supported on it. On elements that are not trimmed the functions are generally sufficiently orthogonal, and therefore it is more efficient to only devise blocks for trimmed elements. This requires a separate step for functions that are only supported on non-trimmed elements. For these functions we apply standard diagonal scaling, which can be conceived of as assigning a separate block to each such function. We will refer to the corresponding Additive-Schwarz preconditioner (18) with connectivity-based blocks for all trimmed elements and diagonal scaling for functions that are not contained in any of these blocks as the Connectivity-based Additive-Schwarz (CbAS) preconditioner.
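The block selection described above can be sketched as follows; the element-to-function connectivity and the set of trimmed elements are assumed to be provided by the discretization, and all names are illustrative:

```python
import numpy as np

def cbas_blocks(connectivity, trimmed, n_dofs):
    """Connectivity-based block selection: one block per trimmed element
    (all functions supported on it) and a single-index block, i.e. diagonal
    scaling, for every function not supported on any trimmed element.

    connectivity : list of index arrays, the functions supported on each element
    trimmed      : iterable of element indices intersected by the domain boundary
    n_dofs       : total number of basis functions
    """
    blocks = [np.asarray(connectivity[e]) for e in trimmed]
    covered = np.zeros(n_dofs, dtype=bool)
    for idx in blocks:
        covered[idx] = True
    blocks += [np.array([i]) for i in np.flatnonzero(~covered)]  # diagonal scaling
    return blocks
```

The resulting list of blocks can be passed directly to the additive_schwarz sketch above to obtain the CbAS preconditioner.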

We do note that the effectiveness of devising one block for every trimmed element does not necessarily extend to hierarchically refined grids, because of the connectivity structure for such grids. Also, the CbAS preconditioner may be enhanced with more advanced preconditioning techniques such as multigrid-type approaches (see e.g., Cai1991 ; Cai1992 ; Widlund1992 ; Smith1996 ; Toselli2005 ).

3.2 Computational cost

The computational cost of applying CbAS is limited. The number of basis functions that are supported per element is $(p+1)^d$ for every solution variable, with $p$ the order of the discretization and $d$ the number of dimensions. The cost of inverting the local matrices to construct the preconditioner is therefore limited. For fine grids, the number of trimmed elements is much smaller than the total number of elements. Typically, the number of trimmed elements scales with $h^{1-d}$ while the total number of elements scales with $h^{-d}$. The computational cost of setting up the preconditioner, storing the preconditioner, and multiplying vectors with the preconditioner is therefore only a fraction of the inevitable cost of assembling the complete system matrix, storing the matrix, and evaluating matrix-vector products. Moreover, the Additive-Schwarz procedure underlying CbAS is easily parallelizable, because the block-wise preconditioners can be computed fully separately.

It is possible to further reduce the number of blocks. We have observed similar condition numbers and eigenvalue ratios when blocks were only devised for elements with volume fractions below a certain threshold. These blocks can also be further restricted to only the small functions (e.g., functions that are small in the operator norm). However, because the computational cost involved in the preconditioning scheme described above is already small, such reductions merely make the procedure unnecessarily complicated.

For multivariate systems with vector-valued solutions the computational efficiency of CbAS can be enhanced further by exploiting the structure of such problems. In the case of linear elasticity, for example, the original basis consists of separate basis functions for each of the spatial directions. Basis functions in the preconditioned basis consist of linear combinations of basis functions of the original basis. Because basis functions associated with different spatial directions can never become linearly dependent, the preconditioned basis does not need to contain linear combinations of original basis functions in different spatial directions. The preconditioner $\mathbf{S}$, and consequently $\mathbf{P}$, can therefore be restricted to block-diagonal matrices with separate blocks for every spatial direction, which is illustrated for a two-dimensional elasticity problem in Figure 4 and sketched in code below the figure. This has been applied to the velocity functions in the Stokes and Navier-Stokes test cases in Section 4.3 and Section 4.4.

Figure 4: Typical sparsity patterns of the CbAS preconditioner for a two-dimensional elasticity problem: (a) one block per element; (b) separate blocks for every spatial direction.
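A sketch of this per-direction block construction, under the assumption of a component-wise degree-of-freedom numbering in which the scalar basis is simply repeated for each spatial direction with an offset, is:

```python
import numpy as np

def cbas_blocks_vector(connectivity, trimmed, n_scalar, n_dim):
    """Per-direction connectivity blocks for a vector-valued field, assuming the
    dof of scalar basis function j in component i is numbered j + i * n_scalar."""
    blocks = []
    for i in range(n_dim):
        offset = i * n_scalar
        for e in trimmed:
            blocks.append(np.asarray(connectivity[e]) + offset)
    covered = np.zeros(n_scalar * n_dim, dtype=bool)
    for idx in blocks:
        covered[idx] = True
    blocks += [np.array([k]) for k in np.flatnonzero(~covered)]  # diagonal scaling
    return blocks
```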

Finally, the computational cost of the preconditioner is also bounded when a block is devised for every element, which can (by approximation) be the case when immersed finite element methods are applied to problems posed on porous domains. Because the entries in the system matrix are only nonzero if the functions associated with these entries have intersecting supports, the preconditioner will in that case have exactly the same sparsity pattern as the system matrix. The required memory to store the preconditioner and the cost of multiplying a vector with the preconditioner will therefore be equal to the required memory and the cost of a matrix-vector multiplication for the system matrix, respectively.

3.3 Extension to mixed problems

The CbAS preconditioner described in Section 3.1 can be extended to mixed problems, for which the solution vector represents different physical quantities. The block structure of such systems can be exploited in the construction of the preconditioner.

We restrict our considerations here to the prototypical Stokes and Navier-Stokes equations. In most variational forms of the Stokes and Navier-Stokes equations – which involve a velocity field and a pressure field – the pressure-pressure matrix block is empty. Hence, one cannot simply invert blocks of the pressure-pressure matrix in the same way as for the distinct spatial directions in Section 3.2. To develop a preconditioner for the pressure we therefore need to consider the interaction between the pressure and the velocity functions. For a symmetric variational form of the Stokes equations, symmetric preconditioning with a block-diagonal matrix with separate blocks $\mathbf{S}_u$ and $\mathbf{S}_p$ for the velocity and pressure, respectively, yields:

$$\begin{bmatrix} \mathbf{S}_u & \mathbf{0} \\ \mathbf{0} & \mathbf{S}_p \end{bmatrix} \begin{bmatrix} \mathbf{A}_{uu} & \mathbf{A}_{up} \\ \mathbf{A}_{up}^T & \mathbf{0} \end{bmatrix} \begin{bmatrix} \mathbf{S}_u^T & \mathbf{0} \\ \mathbf{0} & \mathbf{S}_p^T \end{bmatrix} = \begin{bmatrix} \mathbf{S}_u \mathbf{A}_{uu} \mathbf{S}_u^T & \mathbf{S}_u \mathbf{A}_{up} \mathbf{S}_p^T \\ \mathbf{S}_p \mathbf{A}_{up}^T \mathbf{S}_u^T & \mathbf{0} \end{bmatrix}. \qquad (20)$$

The optimal preconditioner of the velocity-velocity matrix is simply its inverse:

$$\mathbf{P}_u = \mathbf{S}_u^T \mathbf{S}_u = \mathbf{A}_{uu}^{-1}, \qquad (21)$$

and, hence:

$$\mathbf{S}_u \mathbf{A}_{uu} \mathbf{S}_u^T = \mathbf{I}. \qquad (22)$$

Considering the eigenvalue problem for the matrix in (20) with an optimally preconditioned velocity-velocity matrix, it holds that:

$$\begin{bmatrix} \mathbf{I} & \tilde{\mathbf{A}}_{up} \\ \tilde{\mathbf{A}}_{up}^T & \mathbf{0} \end{bmatrix} \begin{bmatrix} \mathbf{v} \\ \mathbf{q} \end{bmatrix} = \lambda \begin{bmatrix} \mathbf{v} \\ \mathbf{q} \end{bmatrix}, \qquad (23)$$

where $\tilde{\mathbf{A}}_{up} = \mathbf{S}_u \mathbf{A}_{up} \mathbf{S}_p^T$ is the preconditioned velocity-pressure matrix. By block-wise decomposition of this eigenvalue problem, the eigenvalues, $\lambda$, are obtained as:

$$\lambda = 1 \qquad \text{and} \qquad \lambda = \frac{1 \pm \sqrt{1 + 4\mu}}{2}, \qquad (24)$$

with $\mu$ the non-negative eigenvalues of the positive (semi)definite matrix:

$$\tilde{\mathbf{A}}_{up}^T \tilde{\mathbf{A}}_{up} = \mathbf{S}_p \mathbf{A}_{up}^T \mathbf{A}_{uu}^{-1} \mathbf{A}_{up} \mathbf{S}_p^T. \qquad (25)$$

The multiplicity of the eigenvalue $\lambda = 1$ is equal to the number of velocity functions minus the rank of the matrix in (25). Assuming this matrix is nonsingular (i.e., the pair of function spaces is inf-sup stable) this is equal to the number of velocity functions minus the number of pressure functions. The multiplicities of the eigenvalues $\lambda = \frac{1 + \sqrt{1+4\mu}}{2}$ and $\lambda = \frac{1 - \sqrt{1+4\mu}}{2}$ are equal to the multiplicity of the corresponding eigenvalue $\mu$.

In view of the symmetry of the preconditioned matrix, the condition number corresponds to the ratio between the (in magnitude) largest and smallest eigenvalues. Equation (24) relates the eigenvalue spectrum of this matrix to that of the positive (semi)definite matrix (25). The spectrum of the matrix (25) depends on the preconditioning matrices $\mathbf{S}_u$ and $\mathbf{S}_p$, the former of which has already been fixed to optimally precondition the velocity-velocity matrix. The preconditioning matrix $\mathbf{S}_p$ is optimal if the resulting condition number of the preconditioned Stokes matrix is minimal. To derive the optimal preconditioner we first note that:

$$\frac{1 + \sqrt{1 + 4\mu}}{2} > 1 \qquad \text{and} \qquad \left| \frac{1 - \sqrt{1 + 4\mu}}{2} \right| < 1 \;\; \text{for} \;\; \mu < 2, \qquad (26)$$

such that optimality requires $\mu \geq 2$ for all eigenvalues of (25). Formal optimization then yields $\mu = 2$ for all eigenvalues, resulting in:

$$\kappa_2 \left( \begin{bmatrix} \mathbf{I} & \tilde{\mathbf{A}}_{up} \\ \tilde{\mathbf{A}}_{up}^T & \mathbf{0} \end{bmatrix} \right) = 2. \qquad (27)$$

All eigenvalues of (25) are equal to $2$ if the matrix is equal to $2\mathbf{I}$, such that:

$$\mathbf{S}_p \mathbf{A}_{up}^T \mathbf{A}_{uu}^{-1} \mathbf{A}_{up} \mathbf{S}_p^T = 2 \mathbf{I} \qquad \Longleftrightarrow \qquad \mathbf{S}_p^T \mathbf{S}_p = 2 \left( \mathbf{A}_{up}^T \mathbf{A}_{uu}^{-1} \mathbf{A}_{up} \right)^{-1}. \qquad (28)$$

From the last equality in this expression one then obtains an optimal preconditioning matrix, $\mathbf{P}_p = \mathbf{S}_p^T \mathbf{S}_p$, for the pressure block:

$$\mathbf{P}_p = \left( \tfrac{1}{2}\, \mathbf{A}_{up}^T \mathbf{A}_{uu}^{-1} \mathbf{A}_{up} \right)^{-1}. \qquad (29)$$

The procedure for determining the pressure preconditioning matrix, $\mathbf{P}_p$, according to (29), is as follows. One first computes $\mathbf{P}_u$ as the CbAS preconditioner of the velocity-velocity matrix, for each spatial direction separately. Using this velocity preconditioner one then computes $\mathbf{P}_p$ as the CbAS preconditioner of the approximate Schur complement matrix in (29), with $\mathbf{A}_{uu}^{-1}$ replaced by $\mathbf{P}_u$.

For the Navier-Stokes equations, the velocity-velocity matrix and consequently the full system matrix are not symmetric, and for some stabilized variational forms the pressure-pressure block is also nonzero. As mentioned in Section 3.1, one can still apply the Additive-Schwarz framework to the nonsymmetric matrix for such problems.

We note that the use of separate preconditioning blocks for the Stokes problem (or other saddle point problems) has been explored before. It is a well-established concept to use an approximate inverse of the velocity-velocity matrix for the velocity and an approximate inverse of the Schur complement for the pressure (see e.g., Murphy2000 ). In practice the factor in (27)-(29) does not significantly influence the condition number or the convergence behavior of iterative methods for the preconditioned system, and could be omitted to simplify the procedure. For the examples in Section 4 this factor is retained, however.
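A compact sketch of the resulting two-stage construction – reusing the additive_schwarz routine from the earlier sketch and omitting the scalar factor, as discussed above – could look as follows; the block matrices and index blocks are assumed to be provided by the immersed discretization:

```python
import scipy.sparse as sp

def cbas_stokes(A_uu, A_up, velocity_blocks, pressure_blocks):
    """Two-stage CbAS preconditioner for a Stokes-type system [[A_uu, A_up], [A_up^T, 0]]:
    first precondition the velocity block, then the approximate (preconditioner-based)
    pressure Schur complement. The scalar factor in (27)-(29) is omitted here."""
    # additive_schwarz: see the sketch of (18) in Section 3.1.
    P_u = additive_schwarz(A_uu, velocity_blocks)      # velocity preconditioner
    S_p = (A_up.T @ P_u @ A_up).tocsr()                # approximate Schur complement
    P_p = additive_schwarz(S_p, pressure_blocks)       # pressure preconditioner
    return sp.block_diag([P_u, P_p], format="csr")     # block-diagonal preconditioner
```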

Remark 3.1

Because the CbAS preconditioner is composed of a summation of local inverses, the preconditioner deviates from the inverse at entries corresponding to functions that appear in multiple blocks. More specifically, such entries are overestimated by approximately a factor $m_i$, where $m_i$ represents the number of blocks that contain the (velocity) basis function with index $i$. The preconditioner can hence be further improved by diagonal scaling according to:

$$\tilde{\mathbf{P}}_u = \mathbf{D}_u^{-1/2} \mathbf{P}_u \mathbf{D}_u^{-1/2}, \qquad (\mathbf{D}_u)_{ii} = m_i, \qquad (30)$$

and, similarly for the pressure functions:

$$\tilde{\mathbf{P}}_p = \mathbf{D}_p^{-1/2} \mathbf{P}_p \mathbf{D}_p^{-1/2}, \qquad (\mathbf{D}_p)_{ii} = m_i, \qquad (31)$$

with $\mathbf{P}_p$ the CbAS preconditioner of the matrix in (29). In the examples in Section 4 we did, however, not observe a significant effect of the rescalings (30) and (31) on the condition number.

4 Numerical examples

In this section we verify the effectiveness of the CbAS preconditioner proposed in Section 3. We consider four problems to test various aspects of the preconditioner: a Poisson problem with boundary conditions imposed by the symmetric and nonsymmetric Nitsche method in Section 4.1; an SUPG-stabilized convection-dominated convection-diffusion problem with the symmetric Nitsche method in Section 4.2; a Stokes flow problem with the symmetric Nitsche method in Section 4.3; and the steady and transient incompressible Navier-Stokes equations with the symmetric Nitsche method in Section 4.4. These problems range from a symmetric positive definite (SPD) single-field problem for which SIPIC is suitable, to nonsymmetric indefinite mixed problems which are well outside the scope of SIPIC.

Except for the transient Navier-Stokes problem in Section 4.4.2, all problems are posed on the same physical domain. This domain consists of an origin-centered unit square with, also centered at the origin, a circular exclusion of radius $R$:

$$\Omega = \left\{ \mathbf{x} \in \left( -\tfrac{1}{2}, \tfrac{1}{2} \right)^2 : \| \mathbf{x} \| > R \right\}. \qquad (32)$$

By convention we denote the horizontal direction by $x$ and the vertical direction by $y$. The encapsulating domain is discretized by a uniform tensor product mesh with mesh size $h$. The grid has a vertex at the origin and is initially aligned with the edges of the physical domain (i.e., zero rotation angle). The encapsulating domain (and consequently the grid) is then rotated in small increments over a range of angles. This generates different discretizations of the same domain with the same grid size but with varying smallest volume fractions $\eta$. The physical domain, the encapsulating domain and the grid are depicted in Figure 5. For all arrangements we assemble the full system matrix and compute the original and preconditioned condition number or eigenvalue ratio (see Section 2.2). These values provide good indications of the complexity of the system for either the Conjugate Gradient or the Generalized Minimal RESidual iterative solver.

Figure 5: Schematic representation of the physical domain embedded in a simple rectilinear encapsulating domain. The trimmed elements that intersect the physical domain are indicated in yellow and are used in the computation. The white elements do not intersect the domain and are not considered. The blue circles indicate volumetric integration points and the red squares indicate the Gauss points for boundary integration. Note that for graphical clarity the grid size in this figure is not to scale.

We consider discrete approximations based on isogeometric C^1-continuous second order bivariate B-spline bases for the scalar single-field problems. For the two-field mixed problems, we consider the Taylor-Hood pair, in which the velocity space is one polynomial degree higher than the pressure space. The geometry is approximated by the bisection-based tessellation scheme proposed in Verhoosel2015 with a maximal refinement depth of three. The number of Gauss points is taken such that all functionals are integrated exactly over the tessellated domain. For second order terms these points are displayed in Figure 5. Note that variational forms with second order bases as used here contain fourth order terms and require more integration points than indicated in the figure. The condition numbers and eigenvalues are computed by a power algorithm that is terminated when the relative difference between subsequent iterations is smaller than a prescribed tolerance.
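The extreme eigenvalue magnitudes entering these quantities can be estimated with a simple power iteration; the sketch below uses a relative-change stopping criterion and a sparse LU factorization for the inverse iteration, both of which are assumptions of this illustration rather than the implementation used for the reported results:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def power_iteration(apply_op, n, tol=1e-8, maxit=10_000, seed=0):
    """Estimate the largest eigenvalue magnitude of a linear operator by power
    iteration, terminated when the relative change between iterates is below tol."""
    rng = np.random.default_rng(seed)
    v = rng.random(n)
    v /= np.linalg.norm(v)
    lam_old = 0.0
    for _ in range(maxit):
        w = apply_op(v)
        lam = np.linalg.norm(w)
        v = w / lam
        if abs(lam - lam_old) <= tol * abs(lam):
            break
        lam_old = lam
    return lam

# Example: |lambda_max| via products with A, |lambda_min| via solves with A.
n = 60
A = sp.diags([2.0, -1.0, -1.0], [0, -1, 1], shape=(n, n), format="csc")
lu = spla.splu(A)                          # sparse LU factorization for A^{-1} v
lam_max = power_iteration(lambda v: A @ v, n)
lam_min = 1.0 / power_iteration(lu.solve, n)
print(f"estimated eigenvalue ratio: {lam_max / lam_min:.3e}")
```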

4.1 Poisson’s problem with the symmetric and nonsymmetric Nitsche method

We consider the Poisson problem:

$$-\Delta u = f \;\; \text{in } \Omega, \qquad u = 0 \;\; \text{on } \partial\Omega_D, \qquad \partial_n u = 0 \;\; \text{on } \partial\Omega_N, \qquad (33)$$

with the symmetric Nitsche1971 and nonsymmetric Freund1995 ; Burman2012 ; Schillinger2016 variants of Nitsche’s method to impose boundary conditions. The homogeneous Dirichlet condition is imposed on the boundary of the unit square and the homogeneous Neumann condition is imposed on the boundary of the circular exclusion.

The variational form of the problem with the symmetric Nitsche method, resulting in an SPD matrix, is:

$$a(u^h, v^h) = \int_\Omega \nabla u^h \cdot \nabla v^h \, \mathrm{d}\Omega - \int_{\partial\Omega_D} \left( v^h \, \partial_n u^h + u^h \, \partial_n v^h \right) \mathrm{d}\Gamma + \int_{\partial\Omega_D} \beta \, u^h v^h \, \mathrm{d}\Gamma, \qquad b(v^h) = \int_\Omega f v^h \, \mathrm{d}\Omega. \qquad (34)$$

In this formulation $\beta$ denotes the element-wise stabilization parameter that is required to ensure coercivity of the bilinear operator. A lower bound for $\beta$ on each element $K$ is given by:

$$\beta_K \geq 2 \sup_{v^h \in V^h|_K} \frac{\| \partial_n v^h \|^2_{L^2(\partial\Omega_D \cap K)}}{\| \nabla v^h \|^2_{L^2(\Omega \cap K)}}. \qquad (35)$$

To compute this bound we solve a local generalized eigenvalue problem following the approach in Embar2010 and we set $\beta_K$ accordingly. Let us mention that to improve the conditioning of this generalized eigenvalue problem we perform a local change of basis, as described in (SIPIC, Appendix A). For shape regular trimmed elements it holds that $\beta_K \propto h_K^{-1}$, with $h_K$ a typical length scale associated with the trimmed element $K$. The variational form of the problem with the nonsymmetric Nitsche method, resulting in a nonsymmetric positive definite matrix, is:

$$a(u^h, v^h) = \int_\Omega \nabla u^h \cdot \nabla v^h \, \mathrm{d}\Omega - \int_{\partial\Omega_D} v^h \, \partial_n u^h \, \mathrm{d}\Gamma + \int_{\partial\Omega_D} u^h \, \partial_n v^h \, \mathrm{d}\Gamma + \int_{\partial\Omega_D} \beta \, u^h v^h \, \mathrm{d}\Gamma, \qquad b(v^h) = \int_\Omega f v^h \, \mathrm{d}\Omega. \qquad (36)$$

Note that this nonsymmetric Nitsche method does not strictly require a stabilization parameter, because the consistency and adjoint edge terms cancel for $v^h = u^h$ Oden1998 ; Burman2012 . Both variational forms (34) and (36) are consistent with (33). On the full space $H^1(\Omega)$ both bilinear forms are not bounded and the bilinear form in (34) is not coercive, see e.g., ErnGuermond . Therefore (34) and (36) are posed on the finite dimensional space $V^h$, on which boundedness and coercivity with respect to the appropriate (mesh-dependent) norm are satisfied for both bilinear forms.

In Figure 6(a) we present the condition numbers corresponding to the symmetric variational form (34) with and without the CbAS preconditioner. We plot the condition numbers versus the smallest volume fraction for all arrangements. Note that the same test case was considered in (SIPIC, Section 4.3). Because of the smooth second order B-spline basis and the SPD character of the matrices, the SIPIC preconditioner yields results similar to those of CbAS, which illustrates the interpretation of CbAS as a generalization of SIPIC. We observe that without preconditioning the condition number scales with a negative power of the smallest volume fraction, in accordance with the analysis in Section 2.2. We also observe that with CbAS preconditioning the system is well-conditioned and robust with respect to the smallest volume fraction, in the sense that for all volume fractions we obtain condition numbers of moderate and nearly constant magnitude, and a scaling relation is not observed. A typical CG solver convergence plot is shown in Figure 6(b), from which it is observed that CbAS preconditioning results in substantially improved convergence behavior of the iterative solver.

The eigenvalue ratios with and without preconditioning of the nonsymmetric variational form in (36) are plotted in Figure 7(a). This figure conveys that the CbAS preconditioner is also effective for this non-SPD system, as for all volume fractions we obtain eigenvalue ratios of similarly moderate magnitude. The results show that the conditioning with the nonsymmetric Nitsche method is almost identical to the conditioning with the symmetric Nitsche method, both with and without preconditioning. This can be explained by the detailed analysis of the ill-conditioning mechanism presented in (SIPIC, Section 3.2), where it is shown that on small cut elements the contribution of the stabilization terms in the symmetric form is of the same magnitude as that of the other terms. Therefore we expect approximately the same norms and consequently the same conditioning for the symmetric and the nonsymmetric forms. The effect of CbAS preconditioning on the convergence of the residual in a GMRES solver is illustrated in Figure 7(b).

Figure 6: (a) Original and preconditioned condition numbers (using SIPIC and CbAS) for the Poisson problem with the traditional symmetric Nitsche method as in (34). (b) The relative (preconditioned) residual error versus the number of CG iterations for a representative arrangement with a small smallest volume fraction.
Figure 7: (a) Original and preconditioned eigenvalue ratios for the Poisson problem with the nonsymmetric Nitsche method. (b) The relative (preconditioned) residual error versus the number of GMRES iterations for a representative arrangement with a small smallest volume fraction.

4.2 Convection-diffusion problem

We consider the convection-dominated convection-diffusion problem:

$$\mathbf{a} \cdot \nabla u - \kappa \Delta u = 0 \;\; \text{in } \Omega, \qquad u = g \;\; \text{on } \partial\Omega, \qquad (37)$$

with field variable $u$, convective velocity $\mathbf{a}$ and diffusion coefficient $\kappa$, chosen such that the convection is clearly dominant. Because both the boundary of the unit square and the boundary of the circular exclusion are partially inflow and partially outflow boundaries, we pose Dirichlet conditions on the full boundary $\partial\Omega$. The Dirichlet data $g$ prescribes a nonzero value on the lower boundary and on part of the left boundary, and homogeneous (zero) data on the remainder of the boundary:

(38)

The Dirichlet data does not affect the system matrix, however, and consequently does not affect the conditioning. The solution to (37)–(38) in the limit of vanishing diffusion is shown in Figure 8. Note that the boundary layers at the right boundary and at the boundary of the circular exclusion, whose thickness is proportional to the diffusion coefficient, disappear in this limit. In the limit of vanishing diffusion the boundary conditions at the outflow are violated, such that a solution in $H^1(\Omega)$ no longer exists. This is evident as for $\kappa = 0$ the problem posed in (37) is no longer elliptic. The solution in Figure 8 is therefore only a limit in $L^2(\Omega)$, as for $\kappa \to 0$ the solutions diverge in $H^1(\Omega)$.

Figure 8: Limit of the solution to the convection-diffusion problem (37) for vanishing diffusion. The non-homogeneous part of the Dirichlet boundary is indicated in green, and the green arrow indicates the direction of the convective velocity $\mathbf{a}$.

We employ the variational form introduced in Bazilevs2007 in which the convective terms are stabilized by Streamline Upwind Petrov-Galerkin (SUPG) terms Brooks1982 and Dirichlet boundary conditions are imposed by Nitsche's method:

(39)

The same element-wise stabilization parameter as in (34) is employed. Different choices for the SUPG parameter are motivated in Tezduyar2000 . In all our examples we use an SUPG parameter proportional to $h_{\mathbf{a}} / \|\mathbf{a}\|$, with $h_{\mathbf{a}}$ the maximal element length in the direction of the velocity $\mathbf{a}$. For uniform tensor product grids this length follows from the orientation of $\mathbf{a}$ relative to the unit vectors along the grid lines. We therefore have:

(40)

Our computations, based on the SUPG parameter according to (40) as a global parameter, did not indicate a need to consider a locally modified parameter on trimmed elements. We note that in the setting considered here all trimmed elements also experience the stabilizing effect of the Nitsche terms, and that this observation does not necessarily extend to other immersed implementations of SUPG.

The eigenvalue ratios of the resulting non-SPD matrices without preconditioning and with CbAS preconditioning are plotted in Figure 9(a) for all arrangements. We again observe that without preconditioning the eigenvalue ratio scales with a negative power of the smallest volume fraction. The results indicate that with preconditioning the system is well-conditioned and robust with respect to the smallest volume fraction, as all obtained eigenvalue ratios are of moderate and nearly constant magnitude and no scaling relation can be observed. The resulting positive effect on the GMRES solver convergence behavior is illustrated in Figure 9(b).

Figure 9: (a) Original and preconditioned eigenvalue ratios for the convection-dominated convection-diffusion problem with boundary conditions imposed by Nitsche's method. (b) The relative (preconditioned) residual error versus the number of GMRES iterations for a representative arrangement with a small smallest volume fraction.

4.3 Stokes problem

A Stokes flow problem is considered according to:

$$-\nabla \cdot \left( 2 \nabla^s \mathbf{u} - p \mathbf{I} \right) = \mathbf{0}, \qquad \nabla \cdot \mathbf{u} = 0 \qquad \text{in } \Omega. \qquad (41)$$

In this formulation $\mathbf{u}$ and $p$ denote respectively the velocity and the pressure, $\nabla^s$ denotes the symmetric gradient operator and $\mathbf{I}$ denotes the second order identity tensor. We impose a Poiseuille profile on the left (inflow) boundary and homogeneous Dirichlet conditions (no-slip) on the lower and upper boundaries and on the boundary of the circular exclusion:

(42)

Moreover, we impose homogeneous Neumann (traction free) conditions on the right (outflow) boundary. The boundary condition data does not affect the conditioning. The solution to (41) subject to these boundary conditions is plotted in Figure 10.

Figure 10: Solution to the Stokes problem (41) subject to the boundary conditions defined by (42): (a) velocity magnitude; (b) pressure. The inflow and outflow boundaries are indicated in (a) in red and magenta, respectively.

The symmetric variational form with boundary conditions imposed by means of the symmetric Nitsche method is:

(43)

The element-wise stabilization constant is again computed from a local generalized eigenvalue problem as in Section 4.1, here defined as:

(44)

A proper choice for the velocity and pressure spaces, such as the Taylor-Hood pair that we apply here, leads to inf-sup stability of (43). An analysis of the stability and accuracy of different pairs of function spaces for this immersed problem is presented in Hoang2017 .

The resulting symmetric but indefinite matrices are preconditioned by the CbAS preconditioner using the Schur complement as described in Section 3.3. The condition numbers with and without preconditioning are presented in Figure 11(a). We observe that, also for this two-field mixed problem, the condition number without preconditioning scales with a negative power of the smallest volume fraction. The CbAS preconditioner again provides a well-conditioned system that is robust with respect to the smallest volume fraction, with condition numbers of moderate and nearly constant magnitude. Figure 11(b) illustrates the improved GMRES convergence behavior as a result of CbAS preconditioning.

Figure 11: (a) Original and preconditioned condition numbers for the Stokes problem with boundary conditions imposed by Nitsche's method. (b) The relative (preconditioned) residual error versus the number of GMRES iterations for a representative arrangement with a small smallest volume fraction.

4.4 Navier-Stokes problem

In this section we demonstrate the applicability of the CbAS preconditioner to immersed flow problems. We consider both a steady and a transient test case.

4.4.1 Steady Navier-Stokes problem

To elaborate the considered Navier-Stokes problem, we first consider the following boundary value problem corresponding to the steady Oseen equations: