In this work, we prove optimal convergence rates for an adaptive isogeometric boundary element method for the (first-kind) hyper-singular integral equation
associated with the 2D Laplacian. Here, is a bounded Lipschitz domain, whose boundary can be parametrized via non-uniform rational B-splines (NURBS); see Section 2 for the precise statement of the integral operators as well as for the definition and properties of NURBS. Given boundary data , we seek the unknown integral density . We note that (1.1) is equivalent to the Laplace–Neumann problem
where is the trace of the sought potential .
The central idea of isogeometric analysis (IGA) is to use the same ansatz functions for the discretization of (1.1) as are used for the representation of the problem geometry in CAD. This concept, originally invented in [HCB05] for finite element methods (IGAFEM), has proved very fruitful in applications; see also the monograph [CHB09]. Since CAD directly provides a parametrization of the boundary , this makes the boundary element method (BEM) the most attractive numerical scheme, if applicable (i.e., provided that the fundamental solution of the differential operator is explicitly known); see [PGK09, PGKF13] for the first works on isogeometric BEM (IGABEM) in 2D and 3D, respectively.
We refer to [SBTR12, PTC13, SBLT13, ADSS16, NZW17] for numerical experiments, to [HR10, TM12, MZBF15, DHP16, DHK18, DKSW18] for fast IGABEM based on wavelets, fast multipole methods, and hierarchical matrices, and to [HAD14, KHZvE17, ACD18, ACD18, FGK18] for some quadrature analysis.
On the one hand, IGA naturally leads to high-order ansatz functions. On the other hand, however, optimal convergence behavior with higher-order discretizations is only observed in simulations if the (given) data as well as the (unknown) solution are smooth. Therefore, a posteriori
error estimation and related adaptive strategies are mandatory to realize the full potential of IGA. Rate-optimal adaptive strategies for IGAFEM have been proposed and analyzed independently in [BG17, GHP17], while the earlier work [BG16] proves only linear convergence. As far as IGABEM is concerned, available results focus on the weakly-singular integral equation with energy space ; see [FGP15, FGHP16] for a posteriori error estimation as well as [FGHP17] for the analysis of a rate-optimal adaptive IGABEM in 2D, and [Gan17] for corresponding results for IGABEM in 3D with hierarchical splines. Recently, [FGPS18] investigated optimal preconditioning for IGABEM in 2D with locally refined meshes.
In this work, we consider the hyper-singular integral equation (1.1) with energy space . We stress that the latter is more challenging than the weakly-singular case with respect to both numerical analysis and the stability of numerical simulations. Moreover, the present work also addresses the adaptive steering of the smoothness of the NURBS ansatz spaces across elements. The adaptive strategy thus goes beyond the classical setting
considered, e.g., in [FKMP13, Gan13, FFK14, FFK15] for standard BEM with piecewise polynomials. Moreover, while the adaptive algorithm from [FGHP17] only allows for a smoothness reduction (which makes the ansatz space larger), the new algorithm also steers the local increase of smoothness (which makes the ansatz space smaller). Additionally, we account for the approximate computation of the right-hand side. We prove that the new algorithm is rate optimal in the sense of [CFPP14]. Moreover, as a side result, we observe that the related approximation classes are independent of the smoothness of the ansatz functions.
satisfies (with the arclength derivative ) that
Here, is the local mesh-size, and is the Galerkin solution with respect to some approximate discrete data . We compute by the -orthogonal projection of onto discontinuous piecewise polynomials. We stress that data approximation is an important subject in numerical computations, and reliable numerical algorithms have to properly account for it. In particular, the benefit of our approach is that the implementation has to deal with discrete integral operators only. Since is usually non-smooth with algebraic singularities, the stable numerical evaluation of would also require non-standard (and problem-dependent) quadrature rules, which simultaneously resolve the logarithmic singularity of as well as the algebraic singularity of . This is avoided by our approach. Finally, in the appendix, we also generalize the presented results to slit problems and to the weakly-singular integral equation.
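The data approximation by the orthogonal projection onto (here, for simplicity, piecewise constant) polynomials can be sketched as follows; the function name and the use of Simpson's rule are our choices for illustration, not taken from the paper's implementation:

```python
def l2_project_pw_const(f, nodes):
    """L2-orthogonal projection of f onto piecewise constants.

    On each element [a, b] of the partition given by `nodes`, the
    projection is the integral mean of f; Simpson's rule (exact for
    cubics) serves as a simple stand-in for a proper quadrature.
    """
    return [(f(a) + 4.0 * f((a + b) / 2.0) + f(b)) / 6.0
            for a, b in zip(nodes, nodes[1:])]
```

For higher polynomial degrees, one would instead project elementwise onto a local (e.g., Legendre) basis.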
The remainder of this work is organized as follows: Section 2 provides the functional analytic setting of the boundary integral operators and the definition of the mesh, B-splines, and NURBS together with their basic properties. In Section 3, we introduce the new adaptive Algorithm 3.1 and provide our main results on a posteriori error analysis and optimal convergence in Theorem 3.3. The proof of the latter is postponed to Section 4, where we essentially verify the abstract axioms of adaptivity of [CFPP14] and sketch how they imply optimal convergence. Auxiliary results of general interest include a new Scott–Zhang-type operator onto rational splines (Section 4.3) and inverse inequalities (Section 4.4), which are well-known for standard BEM. In Section 5, we underline our theoretical findings via numerical experiments. There, we consider both the hyper-singular and the weakly-singular integral equation. Indeed, our results for the hyper-singular case are briefly generalized in the appendix, where we also comment on slit problems.
2.1. General notation
Throughout and without any ambiguity,
denotes the absolute value of scalars, the Euclidean norm of vectors in , the cardinality of a discrete set, the measure of a set in (e.g., the length of an interval), or the arclength of a curve in . We write to abbreviate with some generic constant , which is clear from the context. Moreover, abbreviates . Throughout, mesh-related quantities have the same index, e.g., is the set of nodes of the partition , and is the corresponding local mesh-width function, etc. The analogous notation is used for the partitions , etc. We use to transform notation on the boundary to the parameter domain, e.g., is the partition of the parameter domain corresponding to the partition of . Throughout, we make the following convention: If is a set of nodes and is defined for all , then
2.2. Sobolev spaces
The usual Lebesgue and Sobolev spaces on are denoted by and . For measurable , we define the corresponding seminorm
with the arclength derivative . It holds that
Moreover, is the space of functions with vanishing trace on the relative boundary , equipped with the same norm. Sobolev spaces of fractional order are defined by the
-method of interpolation [McL00, Appendix B]: For , let .
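Since the inline formulas were lost here, we recall the standard K-method definition from [McL00, Appendix B] in generic notation (the symbol names are ours):

```latex
% K-functional and interpolation norm (standard form, generic symbols)
K(t, v)^2 := \inf_{v = v_0 + v_1}
  \bigl( \|v_0\|_{L^2(\Gamma)}^2 + t^2 \|v_1\|_{H^1(\Gamma)}^2 \bigr),
\qquad
\|v\|_{H^s(\Gamma)}^2 := \int_0^\infty t^{-2s} K(t, v)^2 \,\frac{\mathrm{d}t}{t}
\quad\text{for } 0 < s < 1.
```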
For , Sobolev spaces of negative order are defined by duality , where duality is understood with respect to the extended -scalar product . Finally, we define for all .
2.3. Hyper-singular integral equation
The hyper-singular integral equation (1.1) employs the hyper-singular operator as well as the adjoint double-layer operator . With the fundamental solution of the 2D Laplacian and the outer normal vector , these have the following boundary integral representations
for smooth densities .
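The fundamental solution of the 2D Laplacian and the standard boundary integral representations read as follows; this is a reconstruction in generic notation (we write W for the hyper-singular and K' for the adjoint double-layer operator, since the original symbols were lost):

```latex
G(x, y) := -\frac{1}{2\pi} \log |x - y|,
\qquad
(W v)(x) := -\partial_{\nu(x)} \int_\Gamma \partial_{\nu(y)} G(x, y)\, v(y) \,\mathrm{d}s(y),
\qquad
(K' \phi)(x) := \int_\Gamma \partial_{\nu(x)} G(x, y)\, \phi(y) \,\mathrm{d}s(y).
```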
For , the hyper-singular integral operator and the adjoint double-layer operator are well-defined, linear, and continuous.
For connected and , the operator is symmetric and elliptic up to the constant functions, i.e., is elliptic. In particular,
defines an equivalent scalar product on with corresponding norm . Moreover, there holds the additional mapping property .
2.4. Boundary parametrization
We assume that is parametrized by a continuous and piecewise continuously differentiable path such that is injective. In particular, and are bijective. Throughout and by abuse of notation, we write the same symbol for the respective inverses; the meaning will be clear from the context.
For the left- and right-hand derivative of , we assume that for and for . Moreover, we assume for all that for and .
2.5. Boundary discretization
In the following, we describe the quantities that define the discretization.
Nodes . Let and be a set of nodes. We suppose that for some with such that .
Multiplicity , , and knots . Let be some fixed polynomial order. Each interior node has a multiplicity and . For , we set
The multiplicities induce the knot vector
Elements and partition . Let be the partition of into compact and connected segments with .
Local mesh-sizes , and , . For each element , let be its arclength on the physical boundary and its length in the parameter domain. Note that the lengths and of an element are equivalent, and the equivalence constants depend only on . We define the local mesh-width function by . Additionally, we define by .
Local mesh-ratio . We define the local mesh-ratio by
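The defining formula did not survive extraction; as an illustration (names and details are ours), the usual notion compares the lengths of neighboring elements, here for a partition of an open parameter interval. For a closed boundary, one would additionally compare the first and last elements:

```python
def local_mesh_ratio(nodes):
    """Maximal length ratio of neighboring elements of a 1D partition.

    `nodes` are the sorted break points of a partition of an interval;
    two elements are neighbors if they share a node.
    """
    h = [b - a for a, b in zip(nodes, nodes[1:])]  # element lengths
    return max(max(h[i], h[i + 1]) / min(h[i], h[i + 1])
               for i in range(len(h) - 1))
```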
Patches and . For , we inductively define patches by
The corresponding set of elements is defined as
To abbreviate notation, we set and . If for some , we write and , where we skip the index for as before.
We suppose that we are given fixed initial knots . For refinement, we use the following strategy.
Input: Knot vector , marked nodes , local mesh-ratio .
Define the set of marked elements .
If both nodes of an element belong to , mark by adding it to .
For all other nodes in , increase the multiplicity if it is less than or equal to . Otherwise, mark the elements that contain one of these nodes by adding them to .
Recursively enrich by and until .
Bisect all in the parameter domain by inserting the midpoint of with multiplicity one to the current knot vector.
Output: Refined knot vector .
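Ignoring knot multiplicities, steps (iii)–(iv) can be sketched as follows; this is a simplified illustration with names chosen by us, and the precise enrichment rule of [AFF13] differs in details:

```python
def refine_1d(nodes, marked_idx, kappa_max=2.0):
    """Bisect marked elements, recursively enriching the marked set so
    that the local mesh-ratio of the refined partition stays bounded.

    nodes       sorted break points of the current partition
    marked_idx  indices i of elements [nodes[i], nodes[i+1]] to bisect
    """
    h = [b - a for a, b in zip(nodes, nodes[1:])]
    marked = set(marked_idx)
    # step (iii): if bisecting T would leave a neighbor more than
    # kappa_max times longer than the halves of T, mark the neighbor too
    changed = True
    while changed:
        changed = False
        for i in list(marked):
            for j in (i - 1, i + 1):
                if 0 <= j < len(h) and j not in marked \
                        and h[j] > kappa_max * (h[i] / 2.0):
                    marked.add(j)
                    changed = True
    # step (iv): bisect all marked elements by inserting the midpoints
    new_nodes = []
    for i, (a, b) in enumerate(zip(nodes, nodes[1:])):
        new_nodes.append(a)
        if i in marked:
            new_nodes.append((a + b) / 2.0)
    new_nodes.append(nodes[-1])
    return new_nodes
```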
The optimal 1D bisection algorithm in steps (iii)–(iv) is analyzed in [AFF13]. Clearly, is finer than in the sense that is a subsequence of . For any knot vector on , we define as the set of all knot vectors on such that there exist knot vectors and corresponding marked nodes with , and . Note that , wherefore . We define the set of all admissible knot vectors on as
According to [AFF13, Theorem 2.3], there holds for arbitrary that
Indeed, one can easily show that coincides with the set of all knot vectors which are obtained via iterative bisections in the parameter domain and arbitrary knot multiplicity increases, which satisfy (2.13).
2.7. B-splines and NURBS
Throughout this subsection, we consider an arbitrary but fixed sequence on with multiplicities which satisfy for and . Let denote the corresponding set of nodes with for . Throughout, we use the convention that . For , the -th B-spline of degree is defined for inductively by
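The inductive definition, i.e., the classical Cox–de Boor recursion with the convention 0/0 := 0 and right-continuity at the knots, can be sketched in code as follows (function name ours):

```python
def bspline(i, p, knots, t):
    """Value of the i-th B-spline of degree p on the knot sequence
    `knots` at t, via the Cox-de Boor recursion."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    val = 0.0
    if knots[i + p] > knots[i]:  # skip 0/0 terms at repeated knots
        val += (t - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline(i, p - 1, knots, t)
    if knots[i + p + 1] > knots[i + 1]:
        val += (knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1]) \
               * bspline(i + 1, p - 1, knots, t)
    return val
```

For instance, on the uniform knots [0, 1, 2, 3] the quadratic B-spline attains its maximum 0.75 at t = 1.5, and on an open knot vector the basis forms a partition of unity, in line with Lemma 2.2.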
The following lemma collects some basic properties of B-splines; see, e.g., [dB86].
For , , and , the following assertions (i)–(vi) hold:
The set is a basis of the space of all right-continuous -piecewise polynomials of degree on , which are, at each knot , times continuously differentiable if , and continuous for .
For , the B-spline vanishes outside the interval . It is positive on the open interval .
For , the B-spline is completely determined by the knots , wherefore we also write
The B-splines of degree form a (locally finite) partition of unity, i.e.,
For with , it holds that
where denotes the left-hand limit at .
For and , it holds for the right derivative
In addition to the knots , we consider fixed positive weights with . For and , we define the -th NURBS by
Note that the denominator is locally finite and positive. For any , we define the spline space as well as the rational spline space
2.8. Ansatz spaces
We abbreviate and suppose that, in addition to the initial knots , we are given initial weights with . To apply the results of Section 2.7, we extend the knot sequence in the parameter domain, i.e., arbitrarily to with , , . For the extended sequence, we also write . We define the weight function
Let be a knot vector and abbreviate . Outside of , we extend the corresponding knot sequence as before to guarantee that forms a subsequence of . Via knot insertion from to , Lemma 2.2 (i) proves the existence and uniqueness of weights with
By choosing these weights, we ensure that the denominator of the considered rational splines does not change. These weights are just convex combinations of the initial weights ; see, e.g., [dB86, Section 11]. For , this shows that
We introduce the ansatz space
We define , where is throughout either the identity or the -orthogonal projection onto the space of transformed piecewise polynomials
The corresponding Galerkin approximation reads
3. Main result
In this section, we introduce a novel adaptive algorithm and state its convergence behavior.
3.1. Error estimators
Let . The definition of the error estimator (3.1) requires the additional regularity , which leads to due to the mapping properties of . Moreover, note that due to the mapping properties of and the fact that . Therefore, the following error indicators are well-defined. We consider the sum of the weighted-residual error indicators from [CS95, Car97] and oscillation terms
Recall the convention (2.1) for .
To incorporate the possibility of knot multiplicity decrease, we define the knots by decreasing the multiplicities of all nodes whose multiplicity is larger than and the original multiplicity if , i.e., and (where we set if ). Let
denote the corresponding Scott–Zhang-type projection from Section 4.3 below. To measure the approximation error by multiplicity decrease, we consider the following indicators
We define and as in (3.1).
3.2. Adaptive algorithm
We propose the following adaptive algorithm.
Input: Adaptivity parameters , , , .
Adaptive loop: For each iterate the following steps (i)–(iv):
Compute Galerkin approximation .
Compute refinement and coarsening indicators and for all .
Determine a set of nodes that is minimal up to the multiplicative constant and satisfies the Dörfler marking
Determine a set of nodes with , which satisfies the following marking criterion
and define as well as .
Generate refined intermediate knot vector and then coarsen knot vector to from by decreasing the multiplicities of all by one.
Output: Approximations and error estimators for all .
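The Dörfler marking in step (iii) can be realized greedily by sorting, which already yields a set of minimal cardinality; a sketch (the inputs are the squared indicators, function name ours):

```python
def doerfler_mark(eta_sq, theta):
    """Smallest index set M with theta * sum(eta_sq) <= sum over M,
    obtained by picking the largest squared indicators first."""
    order = sorted(range(len(eta_sq)), key=lambda i: -eta_sq[i])
    goal = theta * sum(eta_sq)
    marked, acc = [], 0.0
    for i in order:
        if acc >= goal:
            break
        marked.append(i)
        acc += eta_sq[i]
    return marked
```

The sorting costs O(n log n); bin-based strategies achieve linear cost at the price of minimality only up to a constant, which is all the analysis requires.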
(a) By additionally marking the nodes , we enforce that the neighboring elements of any node , marked for multiplicity decrease, are bisected. We emphasize that the enriched set still satisfies the Dörfler marking with parameter and is minimal up to the multiplicative constant .
(c) Let even . If, in each step, we choose a set that is minimal up to the multiplicative constant such that , and define as well as as in Algorithm 3.1 (iii), then this leads to standard -refinement with no multiplicity increase and thus no decrease (independently of how and are chosen).
3.3. Linear and optimal convergence
Our main result is that Algorithm 3.1 guarantees linear convergence with optimal algebraic rates. For standard BEM with piecewise polynomials, such a result is proved in [Gan13, FKMP13, FFK14] for weakly-singular integral equations and in [Gan13, FFK15] for hyper-singular integral equations, where [FFK14, FFK15] also account for data oscillation terms. For IGABEM for the weakly-singular integral equation (but without knot multiplicity decrease), an analogous result is already proved in our recent work [FGHP17]. To precisely state the main theorem, let
be the finite set of all refinements having at most knots more than .
Analogously to [CFPP14], we introduce the estimator-based approximability constant
By this constant, one can characterize the best possible convergence rate. In explicit terms, this constant is finite if and only if an algebraic convergence rate of for the estimator is possible for suitably chosen knot vectors. Similarly, we define
The constant characterizes the best possible convergence rate starting from when only bisection is used and all new nodes have multiplicity . The constant characterizes the best possible rate starting from the coarsest knot vector when only bisection is used and all new nodes have maximal multiplicity . Hence, characterizes the rate for standard BEM with continuous piecewise polynomial ansatz functions. Note that the constants coincide if .
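For orientation, in the generic notation of [CFPP14] such an approximability constant typically has the form (symbols chosen by us, since the originals were lost):

```latex
\|u\|_{\mathbb{A}_s}
:= \sup_{N \in \mathbb{N}_0} \, (N + 1)^s
   \min_{\mathcal{K}_\star \in \mathbb{K}(N)} \eta_\star ,
```

where the minimum runs over the refinements with at most N additional knots and the corresponding error estimators; finiteness of this constant is then equivalent to the estimator decaying with rate N to the power -s along suitably chosen knot vectors.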
The following theorem is the main result of our work. The proof is given in Section 4.
Let so that the weighted-residual error estimator is well-defined. Then, the estimator from (3.1) is reliable as well as weakly efficient, i.e., there exist such that, for all ,