## 1. Introduction

In this paper we are concerned with the design and analysis of preconditioners for the fractional Laplacian with negative exponent. More specifically, let be a bounded -dimensional domain, and a parameter. We then consider the problem of finding satisfying

(1.1)

where is given. Here, denotes the usual Sobolev space of square-integrable functions with square-integrable first order derivatives and zero trace on the boundary of , and denotes its dual space. Then is defined from the spectral decomposition of . Our aim in this work is to design efficient preconditioners for discretizations of .

Due to the negative exponent, common preconditioning strategies will fail in this context. In particular, for positive , behaves similarly to

in that the eigenfunctions corresponding to high eigenvalues are oscillatory, and vice versa. As such, the error from simple iteration schemes, like Richardson’s iteration, is relatively smooth and can be well represented on a coarser function space. This observation suggests that multigrid operators can provide efficient preconditioners for

, and motivated the construction of additive multigrid preconditioners in [5]. However, in our current context the roles are reversed. The oscillatory eigenfunctions of correspond to the lower end of the spectrum. Then, neither simple smoothing procedures nor coarse grid correction will eliminate the oscillatory part of the error, and therefore we cannot hope for a straightforward multigrid method to work.

The preconditioners proposed in this work will be based on the auxiliary space preconditioner framework, [32]. Of particular note is that the transfer operator, whose role is to relate the original space and the auxiliary space, will be a differential operator. Consequently, the preconditioner on the auxiliary space will have to be spectrally equivalent to the inverse of a differential operator raised to a positive, fractional power. To motivate this, let

denote the spectral interpolation (see

[27, Ch. 2]) between and , and the dual space of . Then is an isomorphism from to . Following the operator preconditioning framework in [28], an efficient preconditioner for (1.1) should be based on a linear, symmetric isomorphism , the canonical choice being the Riesz mapping . Consequently, the preconditioner should behave like a differential operator raised to a positive, fractional power. Then, roughly speaking, if consists of applications of any standard differential operator, a correction is needed to compensate for this overshoot in fractionality. This correction will then behave like the inverse of a fractional differential operator of positive order. In particular, we will see that is spectrally equivalent to . Here, is the operator realizing the inner product. Thus, the problem of preconditioning will be transferred to the problem of preconditioning , which is amenable to an analysis similar to the one made in [5]. This is an attractive idea because, as we will see, behaves similarly to , where preconditioning strategies based on multilevel decompositions have proved efficient, [2, 3, 20, 21, 23, 26].

Preconditioners for (1.1), and in particular preconditioners based on multilevel decompositions, have previously been studied. For , Bramble et al. designed a V-cycle multigrid operator in [10]. Their construction was based on posing (1.1) in the weaker inner product, where the operator they considered had spectral properties suitable for multigrid analysis. In [17], similar ideas were used to construct and analyze an additive multigrid operator. Hierarchical basis preconditioners, suitable for (1.1) when were constructed in [29]. These preconditioners were based on an -orthogonal decomposition into each level of the grid hierarchy, thus restricting their use to wavelet spaces where such decompositions are feasible. This was remedied for finite element spaces of low order in [11] by replacing the -projections onto each level with more cheaply computed operators.
One drawback of all the preconditioners mentioned above is that only simple scaling smoothers can be used, which might be seen as too restrictive. Lastly, in [30] the authors constructed optimal auxiliary space preconditioners for (1.1), but they needed to presuppose that a discrete version of was easily computable in the auxiliary space. In this work we will not assume such a discrete operator to be at our disposal. That is, the proposed preconditioners will not require the computation of , or the fractional power of any positive definite operator for that matter.
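The reversed spectral roles discussed above can be seen in a small computation. The sketch below (Python with NumPy; the 1D Dirichlet Laplacian is chosen purely as an illustration, not as the discretization used later in this paper) counts sign changes of eigenvectors as a proxy for how oscillatory they are:

```python
import numpy as np

# 1D Dirichlet Laplacian on n interior nodes; an illustrative stand-in
# for a discretization of the (negative) Laplacian.
n = 63
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
lam, V = np.linalg.eigh(L)  # eigenvalues in ascending order

def sign_changes(v):
    """Count sign changes of a vector: a proxy for how oscillatory it is."""
    s = np.sign(v)
    s = s[s != 0]
    return int(np.sum(s[:-1] * s[1:] < 0))

# For L, high eigenvalues correspond to oscillatory eigenvectors ...
assert sign_changes(V[:, -1]) == n - 1 and sign_changes(V[:, 0]) == 0

# ... while for the negative power L^{-1} the roles are reversed: the
# oscillatory modes now sit at the *low* end of the spectrum, so simple
# smoothers and coarse-grid correction cannot damp them.
mu, W = np.linalg.eigh(np.linalg.inv(L))
assert sign_changes(W[:, 0]) == n - 1 and sign_changes(W[:, -1]) == 0
```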

The reason for this design choice is that our main motivating applications are coupled multiphysics and trace constraint problems, where fractional Sobolev spaces are part of a well-posed variational formulation, but the fractional Laplacian is absent from the operator characterizing the problem. As an illustrative example, let be a bounded domain , with or , and let denote a structure in or on its boundary with codimension . Consider the Poisson equation, in , with the constraint conditions on for given data and . Imposing the trace constraint weakly, similarly to how it was done in [4], yields a saddle point system of the form

(1.2)

where is the trace operator. The solution is sought in . Rewriting (1.2) in matrix form, we have

where is an isomorphism from to . By the framework in [28], a preconditioner for a discretization of (1.2) should be based on a symmetric isomorphism , with the canonical choice being

(1.3)

Cheaply computable operators spectrally equivalent to are well known. The second block is as such the challenging part when designing preconditioners based on (1.3). Note also that the fractional Laplacian only appears in , and not in .

We remark that even though the above example is relatively simple, similar techniques can be used in problems where different PDEs are posed on separate domains and linked through continuity conditions on a common interface . One or more of these continuity conditions can then be enforced weakly by use of Lagrange multipliers, which will often be posed in a fractional Sobolev space. When preconditioning the resulting system, the problem of establishing a computationally feasible operator spectrally equivalent to persists. For instance, in [25] the authors study a multiphysics problem posed on domains of different topological dimension, and continuity is imposed weakly using a Lagrange multiplier. Other applications can be found in [6], where the no-slip condition on the surface of a falling body in a fluid is imposed weakly, or in [31], where the potential jump on a membrane of a cardiac cell is treated similarly. If the embedded structure in (1.2) instead has codimension , then numerical experiments in [24] suggest that block diagonal preconditioners where one block is based on , with , provide efficient preconditioners.

The current paper can in a couple of ways be viewed as a continuation of [5]. Firstly, we define efficient preconditioners for the fractional Laplacian when the exponent , complementing the preconditioners introduced in the previous work. Secondly, in this work we generalize the results from [5] to positive fractional powers of . The analysis will aim to substantiate the intuition that if additive multilevel methods are efficient for and , then “by interpolation” they should be efficient for every . We remark, however, that the analysis of these multilevel methods for fractional

operators assumes certain two-level error estimates on

that will go unproven in this work. This is an unsatisfactory state of affairs, but we do give an approach by which these error estimates can be proven, as well as motivation for their validity. The techniques we propose borrow from [9] and would require a substantial additional toolset. As such, this is left as future work.

The remainder of the current paper is structured as follows. In section 2 we describe the notation used throughout the paper, and give brief introductions to the theory of interpolation spaces and some useful results in functional analysis. Section 3

is devoted to substantiating the above heuristic argument, and show that provided we are given efficient preconditioners for fractional

operators with positive exponent, we can construct efficient preconditioners for the fractional Laplacian with negative exponent. Then, in section 4 we propose such preconditioners as additive multigrid operators and give sufficient conditions under which they are efficient. Lastly, in section 5 we provide a series of numerical experiments verifying the theoretical results obtained in this work.

## 2. Preliminaries

Let be a bounded, polygonal domain in , with boundary . We denote by the space of square integrable functions on , with inner product , and norm . We denote by the usual Sobolev space of functions in with all first-order derivatives also in . The closure of smooth functions with compact support in we denote by , and its dual space is . For , the inner product and norm of we denote by and , respectively. Further, we let

denote the Hilbert space of square-integrable vector fields on

with square-integrable divergence, while we write to mean the space of square-integrable vector fields on with square-integrable . We let denote the standard inner product on defined by

In general, a Hilbert space is equipped with an inner product and norm, which we denote by and , respectively, and its dual is denoted by . For two Hilbert spaces and , we write to mean the space of bounded linear operators , which we equip with the usual operator norm

Now let be a symmetric positive definite operator on a Hilbert space . For the sake of simplicity, we assume the spectrum of to be wholly discrete, i.e. the continuous and residual spectra of are empty. Denote by the set of eigenpairs of , normalized so that

where is the Kronecker delta. Then , for forms an orthonormal basis of , and if has the representation , then

For , we define the fractional power of by

If is only positive semi-definite, then we must restrict to . If is another symmetric positive semi-definite operator on , we write if for every

holds. Note that is equivalent to saying that is positive semi-definite. In addition, we shall write to mean that for every .
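The spectral definition of fractional powers, and the ordering just introduced, translate directly into finite dimensions. The following sketch (an illustration in Python/NumPy, not part of the proposed method) realizes the fractional power of an SPD matrix through its eigendecomposition and checks the ordering via positive semi-definiteness of a difference:

```python
import numpy as np

def frac_power(A, s):
    """A^s for a symmetric positive definite matrix A, via the spectral
    decomposition A = V diag(lam) V^T, mirroring the definition above."""
    lam, V = np.linalg.eigh(A)
    return V @ np.diag(lam ** s) @ V.T

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 5))
A = X @ X.T + 5 * np.eye(5)  # symmetric positive definite

# Basic consistency: A^1 = A, A^{1/2} A^{1/2} = A, A^{-1} = inv(A).
assert np.allclose(frac_power(A, 1.0), A)
assert np.allclose(frac_power(A, 0.5) @ frac_power(A, 0.5), A)
assert np.allclose(frac_power(A, -1.0), np.linalg.inv(A))

# The ordering "A is dominated by B" means B - A is positive semi-definite:
B = A + np.eye(5)
assert np.linalg.eigvalsh(B - A).min() >= -1e-12
```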

A result in operator theory is the Löwner-Heinz inequality, which in our case states that if , then

(2.1)

cf. for instance [22]. Inequality (2.1) means that the function with is operator monotone for . It follows that is operator convex (cf. [18, Thm. 2.1 and 2.5]), that is, for any two symmetric positive semi-definite operators and on a Hilbert space , the inequality

holds for every . A key result regarding operator convex functions is Jensen’s operator inequality (cf. [19, Theorem 2.1]). The version we will use in the current work states that for any bounded, symmetric positive semi-definite operator on , and so that

(2.2)

At numerous points in this paper we will want to apply (2.2) in situations where is a contraction between two different Hilbert spaces. Thus, we make the following slight generalization of (2.2).

###### Lemma 2.1.

Let and be two Hilbert spaces, and an operator satisfying on . Further, assume that is a bounded, symmetric positive semi-definite operator on . Then

(2.3)

for every .

###### Proof.

Note that (2.3) holds for and , so fix . We define the auxiliary Hilbert space , with inner product inherited from the inner products on and . Now, define linear operators and on as

A simple calculation then shows that

by the assumption on . Similarly,

for every . Then, we have from the standard Jensen’s inequality in (2.2) that

In particular, , which completes the proof. ∎
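Lemma 2.1 can be sanity-checked numerically. The sketch below (illustrative only; the random matrices are stand-ins for the operators in the lemma) builds a contraction between spaces of different dimension and verifies that the difference in (2.3) is positive semi-definite for several exponents:

```python
import numpy as np

def psd_power(A, s):
    """A^s for a symmetric positive semi-definite matrix; eigenvalues are
    clipped at zero to guard against round-off."""
    lam, V = np.linalg.eigh(A)
    lam = np.clip(lam, 0.0, None)
    return V @ np.diag(lam ** s) @ V.T

rng = np.random.default_rng(1)

# A contraction T between spaces of different dimension (norm of T <= 1) ...
Mat = rng.standard_normal((7, 4))
T = Mat / (1.01 * np.linalg.norm(Mat, 2))

# ... and a bounded symmetric positive semi-definite operator on the
# larger space.
X = rng.standard_normal((7, 7))
A = X @ X.T

# Lemma 2.1: f(T^* A T) - T^* f(A) T is positive semi-definite for
# f(t) = t^s with s in [0, 1].
for s in (0.25, 0.5, 0.75):
    gap = psd_power(T.T @ A @ T, s) - T.T @ psd_power(A, s) @ T
    assert np.linalg.eigvalsh(gap).min() > -1e-10
```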

### 2.1. Interpolation spaces

In defining fractional Sobolev spaces and fractional spaces, we will use some results from interpolation theory, as presented in [27], so we give a brief review here.

Let and be separable Hilbert spaces with inner products and , and corresponding norms and , respectively. Furthermore, we assume that , with dense in and continuous injection. In this case we call and compatible.

Denote by the set of so that the linear form

is continuous in . Following the discussion in [27], we note that is dense in . Using Riesz’ representation theorem, there is a so that

The mapping gives rise to an unbounded linear operator , defined by

(2.4)

Clearly, is self-adjoint and positive. Using the spectral decomposition of self-adjoint operators, we may define the powers, , , of . We define interpolation spaces in the following way:

###### Definition 2.1.

Let and satisfy the above assumptions. For we define the interpolation space

(2.5)

with norm given by the graph norm

(2.6)

It follows by the definition that

The following is a key theorem in interpolation theory.

###### Theorem 2.1.

Let and be two pairs of compatible Hilbert spaces. Further, let be a continuous operator , so that

Then , and

(2.7)

where is a constant independent of , , and .

If we now make the identification , then is dense, with continuous embedding. Thus, the interpolation space is well-defined for according to Definition 2.1. Moreover, we have that (cf. [27, Thm. 6.2])

(2.8)

It is well-known that is densely and continuously embedded in , which implies that we can define the fractional Sobolev spaces for as

We go on to define as the closure in of smooth and compactly supported functions on , while for , we define

We note that this definition for negative fractional Sobolev spaces is equivalent to interpolation between and .

Similarly, we define the fractional space as

(2.9)

### 2.2. Discrete interpolation spaces

The discrete variant of fractional operators can be constructed analogously to the continuous setting. Suppose is a finite-dimensional subspace. We can define the operator by

We note that because is finite-dimensional, all norms are equivalent, and in particular, is a bounded operator. Since is SPD, we can define its fractional powers for , and discrete fractional norms . When and , the norm coincides with the - and norm, respectively. Furthermore, for the discrete norm is equivalent to the norm, with constants of equivalence independent of (cf. [1, Proposition 3.2]).
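For small problems, the discrete fractional norms can be computed from the generalized eigenproblem relating stiffness and mass matrices. The sketch below is an illustration in Python/NumPy on a uniform 1D grid; the P1 mass and stiffness matrices are hypothetical stand-ins for those of any conforming discretization:

```python
import numpy as np

# P1 mass and stiffness matrices on a uniform 1D grid with homogeneous
# Dirichlet conditions (illustrative stand-ins for a conforming pair).
n = 31
h = 1.0 / (n + 1)
A = (1 / h) * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))  # stiffness
M = (h / 6) * (4 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))  # mass

# Generalized eigenpairs A phi = lam M phi with M-orthonormal eigenvectors,
# computed here by symmetrizing with M^{-1/2}.
lm, Vm = np.linalg.eigh(M)
Mih = Vm @ np.diag(lm ** -0.5) @ Vm.T  # M^{-1/2}
lam, V = np.linalg.eigh(Mih @ A @ Mih)
Phi = Mih @ V                          # Phi^T M Phi = identity

def frac_norm(u, s):
    """Discrete fractional norm: sqrt(sum_i lam_i^s (phi_i, u)_M^2)."""
    c = Phi.T @ (M @ u)  # coefficients of u in the eigenbasis
    return float(np.sqrt(np.sum(lam ** s * c ** 2)))

u = np.sin(np.pi * np.arange(1, n + 1) * h)
# s = 0 recovers the L2 (mass) norm, s = 1 the energy (stiffness) norm.
assert np.isclose(frac_norm(u, 0.0), np.sqrt(u @ M @ u))
assert np.isclose(frac_norm(u, 1.0), np.sqrt(u @ A @ u))
```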

Suppose now that we have an additional finite-dimensional subspace . Analogously to before we can define the SPD operator , and its fractional powers , with . In the case of or we have that

However, this inheritance of bilinear forms fails when . Looking ahead, we note that the inheritance of bilinear forms is a common assumption in the design and analysis of multigrid algorithms, so its failure when can be detrimental. The following lemma shows that we are able to recover one of the key inequalities used in [13] in the analysis of multigrid algorithms with non-inherited bilinear forms.

###### Lemma 2.2.

Let . We have that restricted to

That is, for every

(2.10)

###### Proof.

As already noted, for and (2.10) holds with equality, so for the remainder of the proof let .

## 3. Preconditioner for fractional Laplacian

In this section we will establish a way to construct preconditioners for when . We will begin by first considering the continuous setting, which will motivate the construction of preconditioners for a discretization of . We define by

In view of the interpolation theory discussed in the previous section, it is evident that is well-defined for any , and it is an isomorphism from to . We denote its inverse by , and consider the problem of finding so that

(3.1)

for a given . To precondition (3.1), we seek a self-adjoint isomorphism , so that

(3.2)

for some constant .

Now, consider the gradient operator, . It is clear that . On , we define

Using integration by parts, this reduces to the standard when . Moreover, we have that

Thus,

and Theorem 2.1 then implies that . In view of (2.8) and (2.9) we can rewrite this as

(3.3)

Suppose now that we are given a self-adjoint isomorphism which for every satisfies

(3.4)

for some constants independent of . We then define

(3.5)

Our aim is to show that defined by (3.5) satisfies (3.2). We begin by observing that is self-adjoint and maps elements from to . Moreover, the mapping property of in (3.3) and the boundedness of imply that .

Establishing the lower bound of (3.2) is more difficult in that we want to interpolate between lower bounds on the gradient operator. However, Theorem 2.1 is not applicable in this setting. To overcome this problem, we will interpolate between bounds on a left-inverse, , of . In this work, we employ the Bogovskiĭ operator established in [15]. If is star-shaped with respect to an open ball , takes for a vector field the explicit form

Here, with support contained in and integrates to . It can be checked that is a left-inverse of , and satisfies

(3.6)

see [15, Cor. 3.4]. We note that the definition of can be extended to general Lipschitz domains (as such domains are finite unions of star-shaped domains) with the same mapping properties. From (3.6) and Theorem 2.1 we have that

(3.7)

Finally, we are in a position to prove that satisfies (3.2), and hence is a suitable preconditioner for (3.1). The result is stated in the following theorem.

###### Proof.

###### Remark 1.

With the definition of given in (3.5), we have essentially translated the problem of preconditioning to the problem of preconditioning . The advantage of this is that the latter problem has positive exponent, and so, as we will see, will have similar spectral properties to , for which efficient preconditioning strategies have been studied earlier.
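The mechanism behind this remark can be illustrated at the matrix level. In the sketch below (Python/NumPy; the 1D difference matrix `G` is a hypothetical stand-in for the discrete gradient, not the paper's actual discretization), transferring a pseudo-power of the auxiliary operator through the gradient reproduces a fractional power of the Laplacian exactly:

```python
import numpy as np

def psd_power(W, p, tol=1e-10):
    """W^p on the range of a symmetric positive semi-definite W; zero
    eigenvalues are left at zero (a pseudo-power)."""
    lam, V = np.linalg.eigh(W)
    lam_p = np.zeros_like(lam)
    mask = lam > tol
    lam_p[mask] = lam[mask] ** p
    return V @ np.diag(lam_p) @ V.T

# Discrete gradient on a uniform 1D Dirichlet grid: n nodal values are
# mapped to n + 1 differences, so that L = G^T G is the usual stiffness
# stencil.
n = 16
G = np.zeros((n + 1, n))
for i in range(n):
    G[i, i] = 1.0
    G[i + 1, i] = -1.0

L = G.T @ G   # scalar Laplacian, symmetric positive definite
W = G @ G.T   # auxiliary ("vector") operator, PSD with a 1-dim. kernel

# Transferring a pseudo-power of W through the gradient yields a positive
# fractional power of L exactly: G^T W^{s-1} G = L^s for s in (0, 1).
s = 0.3
B = G.T @ psd_power(W, s - 1.0) @ G
assert np.allclose(B, psd_power(L, s))

# B is then an exact preconditioner for L^{-s} in this idealized setting;
# in practice the pseudo-power of W is replaced by a spectrally
# equivalent, cheaply computable (e.g. multilevel) operator.
assert np.allclose(B @ psd_power(L, -s), np.eye(n))
```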

### 3.1. Discrete setting

We will now use the construction of from the previous section as motivation to construct an analogous discrete operator. To that end, let be a shape-regular triangulation of , with characteristic mesh size . For , we let denote the space of all discontinuous, piecewise polynomials of degree at most , subordinate to . That is,

We further let be the Raviart–Thomas space of index , and the Nédélec space of the first kind of index , both relative to the triangulation . It is then well known that , and . We define the discrete gradient operator by

(3.11)

and discrete curl operator by

(3.12)

With these definitions, we have the discrete Helmholtz decomposition . That is, every can be written as

(3.13)

for unique and , cf. e.g. [3]. Moreover, this decomposition is orthogonal in both and .

To get a discrete analogue of the preconditioner in (3.5), we further need to define discrete counterparts to the operators and . To that end, we define the discrete Laplacian as , i.e. is the symmetric operator on that satisfies

(3.14)

Lastly, since is a conforming discretization of , we simply take to be the restriction of to . In other words,

It is well known that (cf. for instance [8]), with these particular choices of and , there is a independent of so that for every

(3.15)

This implies that is surjective or, equivalently, that is injective. As a consequence, is not only symmetric, but also positive-definite, and so is well-defined for every . The discrete counterpart to (3.1) is then to find, for and , a such that

(3.16)

To precondition (3.16), we seek a symmetric positive definite operator which is easy to compute and spectrally equivalent to , with constants of equivalence independent of . Using the previous continuous preconditioner defined in (3.5) as motivation, we will see that

(3.17)

where is a symmetric positive definite operator spectrally equivalent to , leads to an efficient preconditioner for . The key result in this section is given in Theorem 3.2 below, whose proof will resemble the argument we made in the continuous setting. In particular, we must ensure that has the appropriate upper and lower bounds when and . As we will see, the intermediate cases will then follow from Jensen’s operator inequality.

For the upper bounds of , we have from the definitions of and that

(3.18)

which is the discrete analogue to . The discrete analogue to is simply that .

For the necessary lower bounds on , we define by according to the discrete Helmholtz decomposition (3.13). It is then evident that is the identity on . That satisfies the discrete analogues to (3.6) is given in the following lemma.

###### Lemma 3.1.

###### Proof.

We are now in a position to state and prove the main spectral equivalence result of this section, from which the spectral equivalence between given in (3.17) and will readily follow.

###### Theorem 3.2.

###### Proof.

Fix and . We begin by proving the second inequality of (3.20). Define . From (3.18) it follows that . Thus, Lemma 2.1 implies that

(3.21)

Inserting the definition of into (3.21) yields

which is equivalent to the second inequality of (3.20).

In proving the first inequality of (3.20), we will again make use of Lemma 2.1. To that end, we now set , and from Lemma 3.1 it follows that . Thus, an application of Lemma 2.1 yields

which after inserting the definition of becomes

(3.22)

From Lemma 3.1, . Pre- and post-multiplying this inequality by and using the Löwner-Heinz inequality (2.1), we deduce that

(3.23)

We now use (3.22) together with (3.23), and pre- and post-multiply by to get

Finally, multiplying from the left by and from the right by , and using that both and are the identity on , we arrive at

which is the first inequality of (3.20). ∎
