1. Introduction
In this work, we prove optimal convergence rates for an adaptive isogeometric boundary element method for the (first-kind) hypersingular integral equation
(1.1) 
associated with the 2D Laplacian. Here, is a bounded Lipschitz domain whose boundary can be parametrized via non-uniform rational B-splines (NURBS); see Section 2 for the precise statement of the integral operators as well as for the definition and properties of NURBS. Given boundary data , we seek the unknown integral density . We note that (1.1) is equivalent to the Laplace–Neumann problem
(1.2) 
where is the trace of the sought potential .
The central idea of isogeometric analysis (IGA) is to use the same ansatz functions for the discretization of (1.1) as are used for the representation of the problem geometry in CAD. This concept, originally invented in [HCB05] for finite element methods (IGAFEM), has proved very fruitful in applications; see also the monograph [CHB09]. Since CAD directly provides a parametrization of the boundary , this makes the boundary element method (BEM) the most attractive numerical scheme, if applicable (i.e., provided that the fundamental solution of the differential operator is explicitly known); see [PGK09, PGKF13] for the first works on isogeometric BEM (IGABEM) for 2D resp. 3D.
We refer to [SBTR12, PTC13, SBLT13, ADSS16, NZW17] for numerical experiments, to [HR10, TM12, MZBF15, DHP16, DHK18, DKSW18] for fast IGABEM based on wavelets, fast multipole, ℋ-matrices resp. ℋ²-matrices, and to [HAD14, KHZvE17, ACD18, FGK18] for some quadrature analysis.
On the one hand, IGA naturally leads to high-order ansatz functions. On the other hand, however, optimal convergence behavior with higher-order discretizations is only observed in simulations if the (given) data as well as the (unknown) solution are smooth. Therefore, a posteriori error estimation and related adaptive strategies are mandatory to realize the full potential of IGA. Rate-optimal adaptive strategies for IGAFEM have been proposed and analyzed independently in [BG17, GHP17], while the earlier work [BG16] proves only linear convergence. As far as IGABEM is concerned, available results focus on the weakly-singular integral equation with energy space ; see [FGP15, FGHP16] for a posteriori error estimation as well as [FGHP17] for the analysis of a rate-optimal adaptive IGABEM in 2D, and [Gan17] for corresponding results for IGABEM in 3D with hierarchical splines. Recently, [FGPS18] investigated optimal preconditioning for IGABEM in 2D with locally refined meshes.

In this work, we consider the hypersingular integral equation (1.1) with energy space . We stress that the latter is more challenging than the weakly-singular case, with respect to numerical analysis as well as stability of numerical simulations. Moreover, the present work also addresses the adaptive steering of the smoothness of the NURBS ansatz spaces across elements. The adaptive strategy thus goes beyond the classical approach considered, e.g., in [FKMP13, Gan13, FFK14, FFK15] for standard BEM with piecewise polynomials. Moreover, while the adaptive algorithm from [FGHP17] only allows for a smoothness reduction (which makes the ansatz space larger), the new algorithm also steers the local increase of smoothness (which makes the ansatz space smaller). Additionally, we also account for the approximate computation of the right-hand side. We prove that the new algorithm is rate optimal in the sense of [CFPP14]. Moreover, as a side result, we observe that the related approximation classes are independent of the smoothness of the ansatz functions.
To steer the algorithm, we adopt the weighted-residual error estimator from standard BEM [CS95, Car97, CMPS04, FFK15] and prove that it is reliable and weakly efficient, i.e.,
(1.3a)  
satisfies (with the arclength derivative ) that  
(1.3b) 
Here, is the local mesh-size, and is the Galerkin solution with respect to some approximate discrete data . We compute by the orthogonal projection of onto discontinuous piecewise polynomials. We stress that data approximation is an important subject in numerical computations, and reliable numerical algorithms have to properly account for it. In particular, the benefit of our approach is that the implementation has to deal with discrete integral operators only. Since is usually non-smooth with algebraic singularities, the stable numerical evaluation of would also require non-standard (and problem-dependent) quadrature rules, which simultaneously resolve the logarithmic singularity of as well as the algebraic singularity of . This is avoided by our approach. Finally, in the appendix, we generalize the presented results also to slit problems and the weakly-singular integral equation.
Outline
The remainder of the work is organized as follows: Section 2 provides the functional analytic setting of the boundary integral operators as well as the definition of the mesh, B-splines, and NURBS together with their basic properties. In Section 3, we introduce the new adaptive Algorithm 3.1 and provide our main results on a posteriori error analysis and optimal convergence in Theorem 3.3. The proof of the latter is postponed to Section 4, where we essentially verify the abstract axioms of adaptivity of [CFPP14] and sketch how they imply optimal convergence. Auxiliary results of general interest include a new Scott–Zhang-type operator onto rational splines (Section 4.3) and inverse inequalities (Section 4.4), which are well-known for standard BEM. In Section 5, we underline our theoretical findings via numerical experiments. There, we consider both the hypersingular as well as the weakly-singular integral equation. Indeed, our results for the hypersingular case are briefly generalized in the appendix, where we also comment on slit problems.
2. Preliminaries
2.1. General notation
Throughout and without any ambiguity, denotes the absolute value of scalars, the Euclidean norm of vectors in , the cardinality of a discrete set, the measure of a set in (e.g., the length of an interval), or the arclength of a curve in . We write to abbreviate with some generic constant , which is clear from the context. Moreover, abbreviates . Throughout, mesh-related quantities have the same index, e.g., is the set of nodes of the partition , and is the corresponding local mesh-width function, etc. The analogous notation is used for partitions resp. , etc. We use to transform notation on the boundary to the parameter domain, e.g., is the partition of the parameter domain corresponding to the partition of . Throughout, we make the following convention: If is a set of nodes and is defined for all , then
(2.1) 
2.2. Sobolev spaces
The usual Lebesgue and Sobolev spaces on are denoted by and . For measurable , we define the corresponding seminorm
(2.2) 
with the arclength derivative . It holds that
(2.3) 
Moreover, is the space of functions which have a vanishing trace on the relative boundary , equipped with the same norm. Sobolev spaces of fractional order are defined by interpolation [McL00, Appendix B]: For , let . For , Sobolev spaces of negative order are defined by duality , where duality is understood with respect to the extended scalar product . Finally, we define for all .
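Since the inline formulas of this passage were lost, the following LaTeX sketch records the standard definitions it describes (real interpolation for fractional orders, duality for negative orders); the concrete index ranges and tilde-space conventions are assumptions based on the usual setting of [McL00]:

```latex
% Fractional-order spaces by interpolation, 0 < s < 1:
H^{s}(\Gamma) := \bigl[L^{2}(\Gamma),\, H^{1}(\Gamma)\bigr]_{s}
% Negative-order spaces by duality, understood with respect to the
% extended L^2(\Gamma) scalar product:
H^{-s}(\Gamma) := \widetilde{H}^{s}(\Gamma)^{*},
\qquad
\widetilde{H}^{-s}(\Gamma) := H^{s}(\Gamma)^{*}
```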
2.3. Hypersingular integral equation
The hypersingular integral equation (1.1) employs the hypersingular operator as well as the adjoint double-layer operator . With the fundamental solution G(x, y) := −(1/(2π)) log|x − y| of the 2D Laplacian and the outer normal vector , these have the following boundary integral representations
(2.4) 
for smooth densities .
For , the hypersingular integral operator and the adjoint double-layer operator are well-defined, linear, and continuous.
For connected and , the operator is symmetric and elliptic up to the constant functions, i.e., is elliptic. In particular,
(2.5) 
defines an equivalent scalar product on with corresponding norm . Moreover, there holds the additional mapping property .
2.4. Boundary parametrization
We assume that is parametrized by a continuous and piecewise continuously differentiable path such that is injective. In particular, and are bijective. Throughout and by abuse of notation, we write for the inverse of resp. . The meaning will be clear from the context.
For the left and righthand derivative of , we assume that for and for . Moreover, we assume for all that for and .
2.5. Boundary discretization
In the following, we describe the quantities that define the discretization.
Nodes . Let and be a set of nodes. We suppose that for some with such that .
Multiplicity , , and knots . Let be some fixed polynomial order. Each interior node has a multiplicity and . For , we set
(2.7) 
The multiplicities induce the knot vector
(2.8) 
Elements and partition . Let be the partition of into compact and connected segments with .
Local mesh-sizes , and , . For each element , let be its arclength on the physical boundary and its length in the parameter domain. Note that the lengths and of an element are equivalent, where the equivalence constants depend only on . We define the local mesh-width function by . Additionally, we define by .
Local mesh-ratio . We define the local mesh-ratio by
(2.9) 
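As an illustration (not part of the paper; the function name and the neighbor-ratio definition of the mesh-ratio are assumptions), the local mesh-widths and the mesh-ratio of a 1D partition of the parameter domain can be computed as follows:

```python
def local_quantities(nodes):
    """Local mesh-widths and mesh-ratio of a 1D partition.

    nodes: sorted knots z_0 < z_1 < ... < z_n of the parameter domain.
    """
    h = [b - a for a, b in zip(nodes, nodes[1:])]  # element lengths h_j
    if len(h) < 2:
        return h, 1.0
    # local mesh-ratio: maximal length ratio of neighboring elements
    kappa = max(max(h[j] / h[j + 1], h[j + 1] / h[j]) for j in range(len(h) - 1))
    return h, kappa
```

A uniform partition has mesh-ratio 1; bounding this quantity uniformly over all refinements is exactly what the recursive enrichment in the refinement algorithm below enforces.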
Patches and . For , we inductively define patches by
(2.10) 
The corresponding set of elements is defined as
(2.11) 
To abbreviate notation, we set and . If for some , we write and , where we skip the index for as before.
2.6. Meshrefinement
We suppose that we are given fixed initial knots . For refinement, we use the following strategy.
Algorithm 2.1.
Input: Knot vector , marked nodes , local mesh-ratio .
(i) Define the set of marked elements .
(ii) If both nodes of an element belong to , mark by adding it to . For all other nodes in , increase the multiplicity if it is less than or equal to . Otherwise, mark the elements which contain one of these nodes by adding them to .
(iii) Recursively enrich by and until .
(iv) Bisect all in the parameter domain by inserting the midpoint of with multiplicity one into the current knot vector.
Output: Refined knot vector .
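A minimal sketch of the bisection part of the strategy for the piecewise-polynomial case (ignoring knot multiplicities; the function name, the marked-set representation, and the concrete enrichment criterion are assumptions, not the paper's algorithm): the marked set is enlarged until no unmarked element is more than `kappa_max` times as long as half of a marked neighbor, and marked elements are then bisected at their midpoints.

```python
def refine(nodes, marked, kappa_max=2.0):
    """Bisect marked elements of a 1D partition, enriching the marked set
    so that the length ratio of neighboring elements stays moderate.

    nodes:  sorted knots z_0 < ... < z_n
    marked: set of element indices j, element j being (z_j, z_{j+1})
    """
    marked = set(marked)
    h = [b - a for a, b in zip(nodes, nodes[1:])]  # element lengths
    changed = True
    while changed:  # recursive enrichment of the marked set
        changed = False
        for j in marked.copy():
            for k in (j - 1, j + 1):  # neighbors of a marked element
                if 0 <= k < len(h) and k not in marked and h[k] > kappa_max * h[j] / 2:
                    marked.add(k)
                    changed = True
    new_nodes = []
    for j, a in enumerate(nodes[:-1]):
        new_nodes.append(a)
        if j in marked:  # insert midpoint of each marked element
            new_nodes.append((a + nodes[j + 1]) / 2)
    new_nodes.append(nodes[-1])
    return new_nodes
```

The full algorithm of [AFF13] additionally allows repeated bisection within one call and guarantees the uniform mesh-ratio bound (2.13); the sketch performs a single bisection pass only.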
The optimal 1D bisection algorithm in step (iii)–(iv) is analyzed in [AFF13]. Clearly, is finer than in the sense that is a subsequence of . For any knot vector on , we define as the set of all knot vectors on such that there exist knot vectors and corresponding marked nodes with , and . Note that , wherefore . We define the set of all admissible knot vectors on as
(2.12) 
According to [AFF13, Theorem 2.3], there holds for arbitrary that
(2.13) 
Indeed, one can easily show that coincides with the set of all knot vectors which are obtained via iterative bisections in the parameter domain and arbitrary knot multiplicity increases, which satisfy (2.13).
2.7. Bsplines and NURBS
Throughout this subsection, we consider an arbitrary but fixed sequence on with multiplicities which satisfy for and . Let denote the corresponding set of nodes with for . Throughout, we use the convention that . For , the -th B-spline of degree is defined for inductively by
(2.14) B_{i,0}(t) := 1 if t_i ≤ t < t_{i+1} and B_{i,0}(t) := 0 otherwise; B_{i,p}(t) := ((t − t_i)/(t_{i+p} − t_i)) B_{i,p−1}(t) + ((t_{i+p+1} − t)/(t_{i+p+1} − t_{i+1})) B_{i+1,p−1}(t), with the convention 0/0 := 0.
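The inductive definition is the classical Cox–de Boor recursion; a self-contained evaluation sketch (the function name and zero-based indexing are assumptions):

```python
def bspline(i, p, t, knots):
    """Evaluate the i-th B-spline of degree p at t (Cox-de Boor recursion).

    knots: non-decreasing knot sequence; the guards implement the
    convention 0/0 := 0 at repeated knots.
    """
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = (t - knots[i]) / (knots[i + p] - knots[i]) \
            * bspline(i, p - 1, t, knots)
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1]) \
            * bspline(i + 1, p - 1, t, knots)
    return left + right
```

For the clamped knot vector (0, 0, 0, 1, 1, 1) and degree 2, the B-splines reduce to the Bernstein polynomials on [0, 1], which is a convenient sanity check.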
The following lemma collects some basic properties of Bsplines; see, e.g., [dB86].
Lemma 2.2.
For , , and , the following assertions (i)–(vi) hold:

The set is a basis of the space of all right-continuous piecewise polynomials of degree on , which are, at each knot , times continuously differentiable if resp. continuous for .

For , the Bspline vanishes outside the interval . It is positive on the open interval .

For , the B-spline is completely determined by the knots , wherefore we also write
(2.15) 
The B-splines of degree form a (locally finite) partition of unity, i.e.,
(2.16) 
For with , it holds that
(2.17) where denotes the left-hand limit at .

For and , it holds for the right derivative
(2.18) ∎
In addition to the knots , we consider fixed positive weights with . For and , we define the th NURBS by
(2.19) 
Note that the denominator is locally finite and positive. For any , we define the spline space as well as the rational spline space
(2.20) 
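For illustration (not from the paper; function names and zero-based indexing are assumptions), a NURBS basis function is the weighted B-spline normalized by the locally finite, positive weight function in the denominator:

```python
def bspline(i, p, t, knots):
    """i-th B-spline of degree p at t via the Cox-de Boor recursion (0/0 := 0)."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = (t - knots[i]) / (knots[i + p] - knots[i]) \
            * bspline(i, p - 1, t, knots)
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1]) \
            * bspline(i + 1, p - 1, t, knots)
    return left + right


def nurbs(i, p, t, knots, weights):
    """i-th NURBS basis function: w_i * B_i(t) / sum_j w_j * B_j(t)."""
    denom = sum(w * bspline(j, p, t, knots) for j, w in enumerate(weights))
    return weights[i] * bspline(i, p, t, knots) / denom
```

With all weights equal, the NURBS reduce to the B-splines themselves; for general positive weights they still form a partition of unity, since numerator and denominator share the same weight function.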
2.8. Ansatz spaces
We abbreviate and suppose that, in addition to the initial knots , we are given initial weights with . To apply the results of Section 2.7, we extend the knot sequence in the parameter domain, i.e., arbitrarily to with , , . For the extended sequence, we also write . We define the weight function
(2.21) 
Let be a knot vector and abbreviate . Outside of , we extend the corresponding knot sequence as before to guarantee that forms a subsequence of . Via knot insertion from to , Lemma 2.2 (i) proves the existence and uniqueness of weights with
(2.22) 
By choosing these weights, we ensure that the denominator of the considered rational splines does not change. These weights are just convex combinations of the initial weights ; see, e.g., [dB86, Section 11]. For , this shows that
(2.23) 
Moreover, Lemma 2.2 (v) implies that . Finally, we extend arbitrarily to with , identify the extension with , and set
(2.24) 
Lemma 2.2 (i) shows that this definition does not depend on how the sequences are extended. We define the transformed basis functions
(2.25) 
We introduce the ansatz space
(2.26) 
Lemma 2.2 (i) and (v) show that bases of these spaces are given by
(2.27) 
By Lemma 2.2 (i), the ansatz spaces are nested, i.e.,
(2.28) 
We define , where is throughout either the identity or the orthogonal projection onto the space of transformed piecewise polynomials
(2.29) 
The corresponding Galerkin approximation reads
(2.30) 
We note that the choice is only of theoretical interest as it led to instabilities in our numerical experiments, in contrast to the weaklysingular case [FGP15, FGHP16, Gan17].
3. Main result
In this section, we introduce a novel adaptive algorithm and state its convergence behavior.
3.1. Error estimators
Let . The definition of the error estimator (3.1) requires the additional regularity , which leads to due to the mapping properties of . Moreover, note that due to the mapping properties of and the fact that . Therefore, the following error indicators are well-defined. We consider the sum of weighted-residual error indicators from [CS95, Car97] and oscillation terms
(3.1a)  
where  
(3.1b) 
Recall the convention (2.1) for .
To incorporate the possibility of knot multiplicity decrease, we define the knots by decreasing the multiplicities of all nodes whose multiplicity is larger than and the original multiplicity if , i.e., and (where we set if ). Let
(3.2) 
denote the corresponding Scott–Zhangtype projection from Section 4.3 below. To measure the approximation error by multiplicity decrease, we consider the following indicators
(3.3) 
We define and as in (3.1).
3.2. Adaptive algorithm
We propose the following adaptive algorithm.
Algorithm 3.1.
Input: Adaptivity parameters , , , .
Adaptive loop: For each , iterate the following steps (i)–(iv):
(i) Compute the Galerkin approximation .
(ii) Compute the refinement and coarsening indicators and for all .
(iii) Determine a set of nodes that is minimal up to the multiplicative constant and satisfies the Dörfler marking
(3.4) 
Determine a set of nodes with , which satisfies the following marking criterion
(3.5) and define as well as .
(iv) Generate the refined intermediate knot vector and then coarsen it to by decreasing the multiplicities of all by one.
Output: Approximations and error estimators for all .
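The Dörfler marking used to select the refinement set can be sketched as follows (a greedy, sort-based variant; the function name and the squared-indicator convention are assumptions):

```python
def doerfler_mark(indicators, theta):
    """Greedy Doerfler marking: return a set M of indices with
    sum_{j in M} eta_j^2 >= theta * sum_j eta_j^2, minimal for the
    descending-sort selection order."""
    total = sum(eta * eta for eta in indicators)
    order = sorted(range(len(indicators)), key=lambda j: -indicators[j])
    marked, acc = set(), 0.0
    for j in order:
        marked.add(j)
        acc += indicators[j] ** 2
        if acc >= theta * total:
            break
    return marked
```

Exact sorting yields the truly minimal set at O(n log n) cost; the "minimal up to a multiplicative constant" formulation in Algorithm 3.1 also admits cheaper binning strategies with linear cost.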
Remark 3.2.
(a) By additionally marking the nodes , we enforce that the neighboring elements of any node , marked for multiplicity decrease, are bisected. We emphasize that the enriched set still satisfies the Dörfler marking with parameter and is minimal up to the multiplicative constant .
(b) Algorithm 3.1 allows the choice and , and then formally coincides with the adaptive algorithm from [FGHP16] for the weaklysingular integral equation.
(c) Let even . If, in each step, we choose up to the multiplicative constant minimal such that , and define as in Algorithm 3.1 (iii), then this leads to standard refinement with no multiplicity increase and thus no decrease (independently of how and are chosen).
3.3. Linear and optimal convergence
Our main result is that Algorithm 3.1 guarantees linear convergence with optimal algebraic rates. For standard BEM with piecewise polynomials, such a result is proved in [Gan13, FKMP13, FFK14] for weakly-singular integral equations and in [Gan13, FFK15] for hypersingular integral equations, where [FFK14, FFK15] also account for data oscillation terms. For IGABEM for the weakly-singular integral equation (but without knot multiplicity decrease), an analogous result was already proved in our recent work [FGHP17]. To precisely state the main theorem, let
(3.6) 
be the finite set of all refinements having at most knots more than .
Analogously to [CFPP14], we introduce the estimator-based approximability constant
(3.7) 
By this constant, one can characterize the best possible convergence rate. In explicit terms, this constant is finite if and only if an algebraic convergence rate of for the estimator is possible for suitably chosen knot vectors. Similarly, we define
(3.8) 
and
(3.9) 
The constant characterizes the best possible convergence rate starting from when only bisection is used and all new nodes have multiplicity . The constant characterizes the best possible rate starting from the coarsest knot vector when only bisection is used and all new nodes have maximal multiplicity . Hence, characterizes the rate for standard BEM with continuous piecewise polynomial ansatz functions. Note that the constants coincide if .
The following theorem is the main result of our work. The proof is given in Section 4.
Theorem 3.3.
Let so that the weighted-residual error estimator is well-defined. Then, the estimator from (3.1) is reliable as well as weakly efficient, i.e., there exist such that, for all ,