    # Adaptive isogeometric boundary element methods with local smoothness control

In the frame of isogeometric analysis, we consider a Galerkin boundary element discretization of the hyper-singular integral equation associated with the 2D Laplacian. We propose and analyze an adaptive algorithm which locally refines the boundary partition and, moreover, steers the smoothness of the NURBS ansatz functions across elements. In particular and unlike prior work, the algorithm can increase and decrease the local smoothness properties and hence exploits the full potential of isogeometric analysis. We prove that the new adaptive strategy leads to linear convergence with optimal algebraic rates. Numerical experiments confirm the theoretical results. A short appendix comments on analogous results for the weakly-singular integral equation.


## 1. Introduction

In this work, we prove optimal convergence rates for an adaptive isogeometric boundary element method for the (first-kind) hyper-singular integral equation

 Wu=g:=(1/2−K′)ϕ on Γ:=∂Ω (1.1)

associated with the 2D Laplacian. Here, Ω⊂R2 is a bounded Lipschitz domain, whose boundary Γ:=∂Ω can be parametrized via non-uniform rational B-splines (NURBS); see Section 2 for the precise statement of the integral operators W and K′ as well as for the definition and properties of NURBS. Given boundary data ϕ, we seek the unknown integral density u. We note that (1.1) is equivalent to the Laplace–Neumann problem

 −ΔP=0 in Ω subject to the Neumann boundary condition ∂P/∂ν=ϕ on Γ, (1.2)

where u=P|Γ is the trace of the sought potential P.

The central idea of isogeometric analysis (IGA) is to use the same ansatz functions for the discretization of (1.1) as are used for the representation of the problem geometry in CAD. This concept, originally invented in [HCB05] for finite element methods (IGAFEM), has proved very fruitful in applications; see also the monograph [CHB09]. Since CAD directly provides a parametrization of the boundary Γ, this makes the boundary element method (BEM) the most attractive numerical scheme, if applicable (i.e., provided that the fundamental solution of the differential operator is explicitly known); see [PGK09, PGKF13] for the first works on isogeometric BEM (IGABEM) for 2D resp. 3D.

We refer to [SBTR12, PTC13, SBLT13, ADSS16, NZW17] for numerical experiments, to [HR10, TM12, MZBF15, DHP16, DHK18, DKSW18] for fast IGABEM based on wavelets, fast multipole, ℋ-matrices resp. ℋ²-matrices, and to [HAD14, KHZvE17, ACD18, ACD18, FGK18] for some quadrature analysis.

On the one hand, IGA naturally leads to high-order ansatz functions. On the other hand, however, optimal convergence behavior with higher-order discretizations is only observed in simulations if the (given) data as well as the (unknown) solution are smooth. Therefore, a posteriori error estimation and related adaptive strategies are mandatory to realize the full potential of IGA. Rate-optimal adaptive strategies for IGAFEM have been proposed and analyzed independently in [BG17, GHP17], while the earlier work [BG16] proves only linear convergence. As far as IGABEM is concerned, available results focus on the weakly-singular integral equation with energy space H−1/2(Γ); see [FGP15, FGHP16] for a posteriori error estimation as well as [FGHP17] for the analysis of a rate-optimal adaptive IGABEM in 2D, and [Gan17] for corresponding results for IGABEM in 3D with hierarchical splines. Recently, [FGPS18] investigated optimal preconditioning for IGABEM in 2D with locally refined meshes.

In this work, we consider the hyper-singular integral equation (1.1) with energy space H1/2(Γ). We stress that the latter is more challenging than the weakly-singular case, with respect to numerical analysis as well as stability of numerical simulations. Moreover, the present work also addresses the adaptive steering of the smoothness of the NURBS ansatz spaces across elements. The adaptive strategy thus goes beyond the classical loop

 SOLVE⟶ESTIMATE⟶MARK⟶REFINE

considered, e.g., in [FKMP13, Gan13, FFK14, FFK15] for standard BEM with piecewise polynomials. Moreover, while the adaptive algorithm from [FGHP17] only allows for a smoothness reduction (which makes the ansatz space larger), the new algorithm also steers the local increase of smoothness (which makes the ansatz space smaller). Additionally, we also account for the approximate computation of the right-hand side. We prove that the new algorithm is rate optimal in the sense of [CFPP14]. Moreover, as a side result, we observe that the related approximation classes are independent of the smoothness of the ansatz functions.

To steer the algorithm, we adopt the weighted-residual error estimator from standard BEM [CS95, Car97, CMPS04, FFK15] and prove that it is reliable and weakly efficient, i.e.,

 η∙:=(∥h1/2∙((1/2−K′)ϕ∙−WU∙)∥2L2(Γ)+∥h1/2∙(ϕ−ϕ∙)∥2L2(Γ))1/2 (1.3a)

satisfies (with the arclength derivative ∂Γ) that

 C−1rel∥u−U∙∥˜H1/2(Γ)≤η∙≤Ceff(∥h1/2∙∂Γ(u−U∙)∥2L2(Γ)+∥h1/2∙(ϕ−ϕ∙)∥2L2(Γ))1/2. (1.3b)

Here, h∙ is the local mesh-size, and U∙ is the Galerkin solution with respect to some approximate discrete data ϕ∙. We compute ϕ∙ by the L2-orthogonal projection of ϕ onto discontinuous piecewise polynomials. We stress that data approximation is an important subject in numerical computations, and reliable numerical algorithms have to properly account for it. In particular, the benefit of our approach is that the implementation has to deal with discrete integral operators only. Since ϕ is usually non-smooth with algebraic singularities, the stable numerical evaluation of K′ϕ would also require non-standard (and problem-dependent) quadrature rules, which simultaneously resolve the logarithmic singularity of the integral kernel as well as the algebraic singularity of ϕ. This is avoided by our approach. Finally, in the appendix, we generalize the presented results also to slit problems and the weakly-singular integral equation.

### Outline

The remainder of the work is organized as follows: Section 2 provides the functional analytic setting of the boundary integral operators, the definition of the mesh, and B-splines and NURBS together with their basic properties. In Section 3, we introduce the new adaptive Algorithm 3.1 and provide our main results on a posteriori error analysis and optimal convergence in Theorem 3.3. The proof of the latter is postponed to Section 4, where we essentially verify the abstract axioms of adaptivity of [CFPP14] and sketch how they imply optimal convergence. Auxiliary results of general interest include a new Scott–Zhang-type operator onto rational splines (Section 4.3) and inverse inequalities (Section 4.4), which are well known for standard BEM. In Section 5, we underline our theoretical findings via numerical experiments. There, we consider both the hyper-singular as well as the weakly-singular integral equation. Indeed, our results for the hyper-singular case are briefly generalized in the appendix, where we also comment on slit problems.

## 2. Preliminaries

### 2.1. General notation

Throughout and without any ambiguity, |⋅| denotes the absolute value of scalars, the Euclidean norm of vectors in R2, the cardinality of a discrete set, the measure of a set in R (e.g., the length of an interval), or the arclength of a curve in R2. We write A≲B to abbreviate A≤CB with some generic constant C>0, which is clear from the context. Moreover, A≃B abbreviates A≲B≲A. Throughout, mesh-related quantities have the same index, e.g., N∙ is the set of nodes of the partition Q∙, and h∙ is the corresponding local mesh-width function etc. The analogous notation is used for partitions Q∘ resp. Qℓ etc. We use the symbol ˆ⋅ to transform notation on the boundary to the parameter domain, e.g., ˆQ∙ is the partition of the parameter domain corresponding to the partition Q∙ of Γ. Throughout, we make the following convention: If N∙ is a set of nodes and α∙(z)∈R is defined for all z∈N∙, then

 α∙:=α∙(N∙), where α∙(S∙)2:=∑z∈S∙α∙(z)2 for all S∙⊆N∙. (2.1)
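The summation convention (2.1) is easily made concrete. The following Python sketch (with hypothetical node labels and indicator values) shows how the indicator of a set of nodes is assembled from the nodewise indicators.

```python
import math

def group_indicator(alpha, nodes):
    """Convention (2.1): the indicator of a set of nodes is the
    square root of the sum of the squared nodewise indicators."""
    return math.sqrt(sum(alpha[z] ** 2 for z in nodes))

# hypothetical nodewise indicator values
alpha = {"z1": 3.0, "z2": 4.0}
print(group_indicator(alpha, ["z1", "z2"]))  # -> 5.0
```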

### 2.2. Sobolev spaces

The usual Lebesgue and Sobolev spaces on Γ are denoted by L2(Γ) and H1(Γ). For measurable Γ0⊆Γ, we define the corresponding seminorm

 |u|H1(Γ0):=∥∂Γu∥L2(Γ0) for all u∈H1(Γ) (2.2)

with the arclength derivative ∂Γ. It holds that

 ∥u∥2H1(Γ)=∥u∥2L2(Γ)+|u|2H1(Γ) for all u∈H1(Γ). (2.3)

Moreover, ˜H1(Γ0) is the space of functions which have a vanishing trace on the relative boundary ∂Γ0, equipped with the same norm. Sobolev spaces of fractional order 0<σ<1 are defined by the K-method of interpolation [McL00, Appendix B]: For 0<σ<1, let Hσ(Γ):=[L2(Γ);H1(Γ)]σ.

For 0<σ≤1, Sobolev spaces of negative order are defined by duality H−σ(Γ):=Hσ(Γ)∗, where duality is understood with respect to the extended L2(Γ)-scalar product ⟨⋅,⋅⟩Γ. Finally, we define for all .

All details and equivalent definitions of the Sobolev spaces are, for instance, found in the monographs [HW08, McL00, SS11].

### 2.3. Hyper-singular integral equation

The hyper-singular integral equation (1.1) employs the hyper-singular operator W as well as the adjoint double-layer operator K′. With the fundamental solution G(z):=−(1/2π) log|z| of the 2D Laplacian and the outer normal vector ν, these have the following boundary integral representations

 Wu(x)=−∂ν(x)∫Γ∂ν(y)G(x−y)u(y)dy, K′ϕ(x)=∫Γ∂ν(x)G(x−y)ϕ(y)dy (2.4)

for smooth densities u and ϕ.

The hyper-singular integral operator W:H1/2(Γ)→H−1/2(Γ) and the adjoint double-layer operator K′:H−1/2(Γ)→H−1/2(Γ) are well-defined, linear, and continuous.

For connected Γ, the operator W is symmetric and elliptic up to the constant functions, i.e., it is elliptic on the subspace {v∈H1/2(Γ):⟨v,1⟩Γ=0}. In particular,

 ⟨u,v⟩W:=⟨Wu,v⟩Γ+⟨u,1⟩Γ⟨v,1⟩Γ (2.5)

defines an equivalent scalar product on H1/2(Γ) with corresponding norm ∥⋅∥W. Moreover, there holds the additional mapping property W:H1(Γ)→L2(Γ).

With this notation and with the normalization ⟨u,1⟩Γ=0, the strong form (1.1) is equivalently stated in variational form: Find u∈H1/2(Γ) such that

 ⟨u,v⟩W=⟨(1/2−K′)ϕ,v⟩Γ for all v∈H1/2(Γ). (2.6)

Therefore, the Lax–Milgram lemma applies and proves that (2.6) resp. (1.1) admits a unique solution u∈H1/2(Γ). Details are found, e.g., in [HW08, McL00, SS11, Ste08].

### 2.4. Boundary parametrization

We assume that Γ is parametrized by a continuous and piecewise continuously differentiable path γ:[a,b]→Γ such that γ|[a,b) is injective. In particular, γ|[a,b) and γ|(a,b] are bijective onto Γ. Throughout and by abuse of notation, we write γ−1 for the inverse of γ|[a,b) resp. γ|(a,b]. The meaning will be clear from the context.

For the left- and right-hand derivative of , we assume that for and for . Moreover, we assume for all that for and .

### 2.5. Boundary discretization

In the following, we describe the quantities that define the discretization.

Nodes . Let and be a set of nodes. We suppose that for some with such that .

Multiplicity , , and knots . Let p∈N be some fixed polynomial order. Each interior node has a multiplicity and . For , we set

 #∙S∙:=∑z∈S∙#∙z. (2.7)

The multiplicities induce the knot vector

 K∙=(z∙,1,…,z∙,1 (#∙z∙,1 times),…,z∙,n∙,…,z∙,n∙ (#∙z∙,n∙ times)). (2.8)

Elements and partition .  Let Q∙ be the partition of Γ into compact and connected segments with .

Local mesh-sizes h∙ and ˆh∙. For each element Q∈Q∙, let hQ be its arclength on the physical boundary and ˆhQ its length in the parameter domain. Note that the lengths hQ and ˆhQ of an element are equivalent, and the equivalence constants depend only on γ. We define the local mesh-width function h∙ by h∙|Q:=hQ. Additionally, we define ˆh∙ by ˆh∙|Q:=ˆhQ.

Local mesh-ratio . We define the local mesh-ratio by

 ˆκ∙ :=max{ˆhQ/ˆhQ′:Q,Q′∈Q∙ with Q∩Q′≠∅}. (2.9)
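Definition (2.9) can be sketched in a few lines of Python. For simplicity, the sketch below treats a partition of an open parameter interval given by its sorted breakpoints; on the closed boundary, the first and last elements share a node as well and would have to be compared, too.

```python
def local_mesh_ratio(breakpoints):
    """Local mesh-ratio (2.9) of a 1D partition given by sorted
    breakpoints: the maximal ratio of the parameter-domain lengths
    of neighboring (node-sharing) elements."""
    h = [b - a for a, b in zip(breakpoints, breakpoints[1:])]
    return max(
        (max(h[i], h[i + 1]) / min(h[i], h[i + 1]) for i in range(len(h) - 1)),
        default=1.0,
    )

print(local_mesh_ratio([0.0, 0.25, 0.5, 1.0]))  # -> 2.0
```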

Patches and . For , we inductively define patches by

 π0∙(Γ0):=Γ0,πm∙(Γ0):=⋃{Q∈Q∙:Q∩πm−1∙(Γ0)≠∅}. (2.10)

The corresponding set of elements is defined as

 Πm∙(Γ0):={Q∈Q∙:Q⊆πm∙(Γ0)},i.e.,πm∙(Γ0)=⋃Πm∙(Γ0). (2.11)

To abbreviate notation, we set π∙(Γ0):=π1∙(Γ0) and Π∙(Γ0):=Π1∙(Γ0). If Γ0={z} for some z∈N∙, we write πm∙(z):=πm∙({z}) and Πm∙(z):=Πm∙({z}), where we skip the index m for m=1 as before.
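The inductive patch definition (2.10)–(2.11) translates directly into code. The following Python sketch represents elements as (a, b) parameter intervals (a hypothetical, simplified data structure); in 1D, two elements intersect exactly if they share an endpoint.

```python
def patch(elements, seed, m):
    """m-th order patch (2.10)/(2.11) in the parameter domain:
    starting from the seed set of elements, add m times all elements
    that touch the current patch (in 1D: an endpoint of the element
    is an endpoint of some element already in the patch)."""
    current = set(seed)
    for _ in range(m):
        pts = {x for (a, b) in current for x in (a, b)}
        current |= {e for e in elements if e[0] in pts or e[1] in pts}
    return current

els = [(0.0, 0.25), (0.25, 0.5), (0.5, 1.0)]
print(sorted(patch(els, {(0.25, 0.5)}, 1)))
# -> [(0.0, 0.25), (0.25, 0.5), (0.5, 1.0)]
```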

### 2.6. Mesh-refinement

We suppose that we are given fixed initial knots K0. For refinement, we use the following strategy.

###### Algorithm 2.1.

Input: Knot vector , marked nodes , local mesh-ratio .

1. Define the set of marked elements .

2. If both nodes of an element belong to the set of marked nodes, mark the element by adding it to the set of marked elements.

3. For all other nodes in the set of marked nodes, increase the multiplicity if it is less than or equal to p. Otherwise, mark the elements which contain one of these nodes by adding them to the set of marked elements.

4. Recursively enrich by and until .

5. Bisect all marked elements in the parameter domain by inserting the midpoint of each with multiplicity one into the current knot vector.

Output: Refined knot vector .
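The bisection step (v) can be sketched as follows in Python, with elements addressed by their index in a sorted breakpoint list (a hypothetical, simplified representation). The closure steps (ii)–(iv), which bound the local mesh-ratio, and the knot-multiplicity bookkeeping are omitted in this sketch.

```python
def bisect_marked(breakpoints, marked):
    """Step (v) of Algorithm 2.1 in the parameter domain: insert the
    midpoint of every marked element (given by its index) once."""
    refined = []
    for i, (a, b) in enumerate(zip(breakpoints, breakpoints[1:])):
        refined.append(a)
        if i in marked:
            refined.append((a + b) / 2.0)
    refined.append(breakpoints[-1])
    return refined

print(bisect_marked([0.0, 0.5, 1.0], {1}))  # -> [0.0, 0.5, 0.75, 1.0]
```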

The optimal 1D bisection algorithm in steps (iii)–(iv) is analyzed in [AFF13]. Clearly, the output knot vector is finer than the input in the sense that the input knot vector is a subsequence of the output. For any knot vector K∙ on Γ, we define refine(K∙) as the set of all knot vectors on Γ which are obtained from K∙ by successive application of Algorithm 2.1 with corresponding marked nodes. Note that the empty set of marked nodes is admissible, wherefore K∙∈refine(K∙). We define the set of all admissible knot vectors on Γ as

 K:=refine(K0). (2.12)

According to [AFF13, Theorem 2.3], there holds for arbitrary K∙∈K that

 ˆhQ/ˆhQ′≤2ˆκ0for all Q,Q′∈Q∙ with Q∩Q′≠∅. (2.13)

Indeed, one can easily show that K coincides with the set of all knot vectors which are obtained from K0 via iterative bisections in the parameter domain and arbitrary knot multiplicity increases and which satisfy (2.13).

### 2.7. B-splines and NURBS

Throughout this subsection, we consider an arbitrary but fixed knot sequence (t∙,i)i∈Z on R with multiplicities which satisfy for and . Let denote the corresponding set of nodes with for . Throughout, we use the convention that . For i∈Z, the i-th B-spline of degree p∈N0 is defined for t∈R inductively by

 ˆB∙,i,0(t):=χ[t∙,i−1,t∙,i)(t),
 ˆB∙,i,p(t):=(t−t∙,i−1)/(t∙,i−1+p−t∙,i−1) ˆB∙,i,p−1(t)+(t∙,i+p−t)/(t∙,i+p−t∙,i) ˆB∙,i+1,p−1(t) for p∈N. (2.14)

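The Cox–de Boor recursion (2.14) translates directly into code. The following Python sketch evaluates a single B-spline; the index convention matches (2.14), i.e., the degree-0 spline is the characteristic function of [t∙,i−1, t∙,i), and fractions with vanishing denominator are read as zero.

```python
def bspline(i, p, t, knots):
    """Evaluate the i-th B-spline of degree p at t via the Cox-de Boor
    recursion (2.14).  `knots` is an indexable knot sequence; terms
    with vanishing denominator are dropped (read as zero)."""
    if p == 0:
        return 1.0 if knots[i - 1] <= t < knots[i] else 0.0
    value = 0.0
    if knots[i - 1 + p] > knots[i - 1]:
        value += (t - knots[i - 1]) / (knots[i - 1 + p] - knots[i - 1]) \
                 * bspline(i, p - 1, t, knots)
    if knots[i + p] > knots[i]:
        value += (knots[i + p] - t) / (knots[i + p] - knots[i]) \
                 * bspline(i + 1, p - 1, t, knots)
    return value

# partition of unity (2.16) on a uniform knot sequence, degree 1
knots = [0.0, 1.0, 2.0, 3.0, 4.0]
print(bspline(1, 1, 1.5, knots) + bspline(2, 1, 1.5, knots))  # -> 1.0
```

The recursion recomputes lower-degree splines repeatedly; production code (e.g., de Boor's algorithm) evaluates all nonzero splines at t in one sweep instead.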
The following lemma collects some basic properties of B-splines; see, e.g., [dB86].

###### Lemma 2.2.

For , , and , the following assertions (i)–(vi) hold:

1. The set is a basis of the space of all right-continuous -piecewise polynomials of degree on , which are, at each knot , times continuously differentiable if resp. continuous for .

2. For , the B-spline vanishes outside the interval . It is positive on the open interval .

3. For , the B-spline ˆB∙,i,p is completely determined by the knots t∙,i−1,…,t∙,i+p, wherefore we also write

 ˆB(⋅|t∙,i−1,…,t∙,i+p):=ˆB∙,i,p. (2.15)
4. The B-splines of degree form a (locally finite) partition of unity, i.e.,

 ∑i∈ZˆB∙,i,p=1 on R. (2.16)
5. For with , it holds that

 ˆB∙,i,p(t∙,i−)=1 and ˆB∙,i+1,p(t∙,i)=1, (2.17)

where ˆB∙,i,p(t∙,i−) denotes the left-hand limit at t∙,i.

6. For and , it holds for the right derivative

 ˆB′r∙,i,p=p/(t∙,i+p−1−t∙,i−1) ˆB∙,i,p−1−p/(t∙,i+p−t∙,i) ˆB∙,i+1,p−1. (2.18)

In addition to the knots , we consider fixed positive weights with . For i∈Z and p∈N0, we define the i-th NURBS by

 ˆR∙,i,p:=w∙,iˆB∙,i,p∑k∈Zw∙,kˆB∙,k,p. (2.19)

Note that the denominator is locally finite and positive. For any , we define the spline space as well as the rational spline space

 (2.20)
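A NURBS basis function (2.19) is thus just a weighted B-spline normalized by the (locally finite) weight function in the denominator. The following self-contained Python sketch repeats the Cox–de Boor evaluation from (2.14); the knot sequence and weights are hypothetical.

```python
def bspline(i, p, t, knots):
    """Cox-de Boor recursion (2.14); 0/0-fractions are read as 0."""
    if p == 0:
        return 1.0 if knots[i - 1] <= t < knots[i] else 0.0
    value = 0.0
    if knots[i - 1 + p] > knots[i - 1]:
        value += (t - knots[i - 1]) / (knots[i - 1 + p] - knots[i - 1]) \
                 * bspline(i, p - 1, t, knots)
    if knots[i + p] > knots[i]:
        value += (knots[i + p] - t) / (knots[i + p] - knots[i]) \
                 * bspline(i + 1, p - 1, t, knots)
    return value

def nurbs(i, p, t, knots, weights):
    """NURBS (2.19): w_i B_i / sum_k w_k B_k.  `weights` maps B-spline
    indices to positive weights; only the finitely many B-splines that
    are nonzero at t contribute to the denominator."""
    denom = sum(w * bspline(k, p, t, knots) for k, w in weights.items())
    return weights[i] * bspline(i, p, t, knots) / denom

knots = [0.0, 1.0, 2.0, 3.0, 4.0]
weights = {1: 1.0, 2: 1.0, 3: 1.0}
# for equal weights, the NURBS reduces to the underlying B-spline
print(nurbs(2, 1, 1.5, knots, weights))  # -> 0.5
```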

### 2.8. Ansatz spaces

We abbreviate and suppose that, additionally to the initial knots , we are given initial weights with . To apply the results of Section 2.7, we extend the knot sequence in the parameter domain, i.e., arbitrarily to with , , . For the extended sequence, we also write . We define the weight function

 ˆW0:=N0−p∑k=1−pw0,kˆB0,k,p|[a,b). (2.21)

Let be a knot vector and abbreviate . Outside of , we extend the corresponding knot sequence as before to guarantee that forms a subsequence of . Via knot insertion from to , Lemma 2.2 (i) proves the existence and uniqueness of weights with

 ˆW0=N0−p∑k=1−pw0,kˆB0,k,p|[a,b)=N∙−p∑k=1−pw∙,kˆB∙,k,p|[a,b). (2.22)

By choosing these weights, we ensure that the denominator of the considered rational splines does not change. These weights are just convex combinations of the initial weights ; see, e.g., [dB86, Section 11]. For , this shows that

 wmin:=min(W0)≤min(W∙)≤ˆw≤max(W∙)≤max(W0)=:wmax. (2.23)

Moreover, Lemma 2.2 (v) implies that . Finally, we extend arbitrarily to with , identify the extension with , and set

 Sp(K∙,W∙):={ˆS∙∘γ−1:ˆS∙∈ˆSp(ˆK∙,W∙)}. (2.24)

Lemma 2.2 (i) shows that this definition does not depend on how the sequences are extended. We define the transformed basis functions

 R∙,i,p:=ˆR∙,i,p∘γ−1. (2.25)

We introduce the ansatz space

 X∙:={V∙∈Sp(K∙,W∙):V∙(γ(a))=V∙(γ(b−))}⊂H1(Γ) (2.26)

Lemma 2.2 (i) and (v) show that bases of these spaces are given by

 {R∙,i,p:i=2−p,…,N∙−p−1}∪{R∙,1−p,p+R∙,N∙−p,p}. (2.27)

By Lemma 2.2 (i), the ansatz spaces are nested, i.e.,

 X∙⊆X∘ for all K∙∈K and K∘∈refine(K∙). (2.28)

We define the approximate data ϕ∙ as the image of ϕ under either the identity or the L2-orthogonal projection onto the space of transformed piecewise polynomials

 Pp(Q∙):={ˆΨ∙∘γ−1:ˆΨ∙ is ˆQ∙-piecewise polynomial of degree p}. (2.29)

The Galerkin approximation U∙∈X∙ then solves

 ⟨U∙,V∙⟩W=⟨g∙,V∙⟩Γ for all V∙∈X∙, where g∙:=(1/2−K′)ϕ∙. (2.30)

We note that the choice of the identity (i.e., exact data ϕ∙=ϕ) is only of theoretical interest, as it led to instabilities in our numerical experiments, in contrast to the weakly-singular case [FGP15, FGHP16, Gan17].

## 3. Main result

In this section, we introduce a novel adaptive algorithm and state its convergence behavior.

### 3.1. Error estimators

Let K∙∈K. The definition of the error estimator (3.1) requires the additional regularity ϕ∈L2(Γ), which leads to u∈H1(Γ) due to the mapping properties of the involved integral operators. Moreover, note that g∙∈L2(Γ) due to the mapping properties of K′ and the fact that ϕ∙∈L2(Γ). Therefore, the following error indicators are well-defined. We consider the sum of weighted-residual error indicators from [CS95, Car97] and oscillation terms

 η∙(z)2:=res∙(z)2+osc∙(z)2 for all z∈N∙, (3.1a)

where

 res∙(z)2:=∥h1/2∙((1/2−K′)ϕ∙−WU∙)∥2L2(π∙(z)) and osc∙(z)2:=∥h1/2∙(ϕ−ϕ∙)∥2L2(π∙(z)). (3.1b)

Recall the convention (2.1) for η∙, res∙, and osc∙.

To incorporate the possibility of knot multiplicity decrease, we define the knots by decreasing the multiplicities of all nodes whose multiplicity is larger than and the original multiplicity if , i.e., and (where we set if ). Let

 J∙⊖1:L2(Γ)→X∙⊖1 (3.2)

denote the corresponding Scott–Zhang-type projection from Section 4.3 below. To measure the approximation error by multiplicity decrease, we consider the following indicators

 μ∙(z):=∥h−1/2∙(1−J∙⊖1)U∙∥L2(π2p+1∙(z)) for all z∈N∙. (3.3)

We define μ∙ and μ∙(S∙) according to the convention (2.1).

### 3.2. Adaptive algorithm

We propose the following adaptive algorithm.

###### Algorithm 3.1.

Input: Adaptivity parameters , , , .
Adaptive loop: For each ℓ = 0, 1, 2, …, iterate the following steps (i)–(v):

• Compute the Galerkin approximation Uℓ.

• Compute the refinement and coarsening indicators ηℓ(z) and μℓ(z) for all z∈Nℓ.

• Determine a set of nodes M1ℓ⊆Nℓ that is minimal up to a multiplicative constant and satisfies the Dörfler marking

 θη2ℓ≤ηℓ(M1ℓ)2. (3.4)
• Determine a set of nodes with , which satisfies the following marking criterion

 μℓ(M−ℓ)2≤ϑη2ℓ, (3.5)

and define as well as .

• Generate the refined intermediate knot vector and then coarsen it to the new knot vector Kℓ+1 by decreasing the multiplicities of all z∈M−ℓ by one.

Output: Approximations Uℓ and error estimators ηℓ for all ℓ∈N0.
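Step (iii) is a Dörfler marking. A minimal marked set satisfying (3.4) can be obtained greedily by taking nodes in order of decreasing indicator, as the following Python sketch (with hypothetical node labels) illustrates; in practice, binning strategies avoid the log-linear sorting cost.

```python
def doerfler_mark(eta, theta):
    """Greedy Doerfler marking (3.4): return a set M of nodes of
    minimal cardinality with theta * eta(N)^2 <= eta(M)^2, obtained
    by picking nodes with the largest indicators first."""
    total = sum(e ** 2 for e in eta.values())
    marked, acc = set(), 0.0
    for z in sorted(eta, key=eta.get, reverse=True):
        if acc >= theta * total:
            break
        marked.add(z)
        acc += eta[z] ** 2
    return marked

print(doerfler_mark({"z1": 3.0, "z2": 2.0, "z3": 1.0}, 0.5))  # -> {'z1'}
```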

###### Remark 3.2.

(a) By additionally marking the nodes of M−ℓ, we enforce that the neighboring elements of any node marked for multiplicity decrease are bisected. We emphasize that the enriched set still satisfies the Dörfler marking (3.4) with parameter θ and is minimal up to a multiplicative constant.

(b) Algorithm 3.1 allows the choice and , and then formally coincides with the adaptive algorithm from [FGHP16] for the weakly-singular integral equation.

(c) If we choose in each step a set of marked nodes that is minimal up to a multiplicative constant such that the Dörfler marking (3.4) is satisfied, then this leads to standard h-refinement with no multiplicity increase and thus no decrease (independently of how the remaining parameters are chosen).

### 3.3. Linear and optimal convergence

Our main result is that Algorithm 3.1 guarantees linear convergence with optimal algebraic rates. For standard BEM with piecewise polynomials, such a result is proved in [Gan13, FKMP13, FFK14] for weakly-singular integral equations and in [Gan13, FFK15] for hyper-singular integral equations, where [FFK14, FFK15] also account for data oscillation terms. For IGABEM for the weakly-singular integral equation (but without knot multiplicity decrease), an analogous result is already proved in our recent work [FGHP17]. To precisely state the main theorem, let

 K(N):={K∙∈K:#∙N∙−#0N0≤N} (3.6)

be the finite set of all refinements of K0 having at most N knots more than K0.

Analogously to [CFPP14], we introduce the estimator-based approximability constant

 ∥u∥As:=supN∈N0((N+1)s infK∙∈K(N)η∙)∈R≥0∪{∞} for all s>0. (3.7)

By this constant, one can characterize the best possible convergence rate. In explicit terms, this constant is finite if and only if an algebraic convergence rate of O(N−s) for the estimator is possible for suitably chosen knot vectors. Similarly, we define

 (3.8)

and

 ∥u∥A1s:=supN∈N0((N+1)sinfK∙∈K1(N)η∙) and ∥u∥Aps:=supN∈N0((N+1)sinfK∙∈Kp(N)η∙). (3.9)

The constant ∥u∥A1s characterizes the best possible convergence rate starting from K0 when only bisection is used and all new nodes have multiplicity one. The constant ∥u∥Aps characterizes the best possible rate starting from the coarsest knot vector when only bisection is used and all new nodes have maximal multiplicity. Hence, ∥u∥Aps characterizes the rate for standard BEM with continuous piecewise polynomial ansatz functions. Note that the constants coincide if p=1.

The following theorem is the main result of our work. The proof is given in Section 4.

###### Theorem 3.3.

Let ϕ∈L2(Γ) so that the weighted-residual error estimator is well-defined. Then, the estimator η∙ from (3.1) is reliable as well as weakly efficient, i.e., there exist constants Crel, Ceff > 0 such that, for all K∙∈K,

 C−1rel∥u−U∙∥˜H1/2(Γ)≤η∙≤Ceff(∥h1/2∙∂Γ(u−U∙)∥2L2(Γ)+∥h1/2∙(ϕ−ϕ∙)∥2L2(Γ))1/2.