Accurate quadrature of nearly singular line integrals in two and three dimensions by singularity swapping

10/22/2019
by Ludvig af Klinteberg, et al.

The method of Helsing and co-workers evaluates Laplace and related layer potentials generated by a panel (composite) quadrature on a curve, efficiently and with high-order accuracy for arbitrarily close targets. Since it exploits complex analysis, its use has been restricted to two dimensions (2D). We first explain its loss of accuracy as panels become curved, using a classical complex approximation result of Walsh that can be interpreted as "electrostatic shielding" of a Schwarz singularity. We then introduce a variant that swaps the target singularity for one at its complexified parameter preimage; in the latter space the panel is flat, hence the convergence rate can be much higher. The preimage is found robustly by Newton iteration. This idea also enables, for the first time, a near-singular quadrature for potentials generated by smooth curves in 3D, building on recurrences of Tornberg-Gustavsson. We apply this to accurate evaluation of the Stokes flow near to a curved filament in the slender body approximation. Our 3D method is several times more efficient (both in terms of kernel evaluations, and in speed in a C implementation) than the only existing alternative, namely, adaptive integration.



Code repository: linequad — Nearly singular quadrature for line integrals in 2D and 3D.

1 Introduction

Integral equation methods enable efficient numerical solutions to piecewise-constant coefficient elliptic boundary value problems, by representing the solution as a layer potential generated by a so-called “density” function defined on a lower-dimensional source geometry [4, 32]. This work is concerned with accurate evaluation of such layer potentials close to their source, when this source is a one-dimensional curve embedded in 2D or 3D space. In 2D, this is a common occurrence: the curve is the boundary of the computational domain or an interface between different materials. In 3D, such line integrals represent the fluid velocity in non-local slender-body theory (SBT) for filaments in a viscous (Stokes) flow [29, 27, 17], and may also represent the fields in the solution of the electrostatic [43] or electromagnetic [11] response of thin wire conductors. Applications of the former include simulation of straight [46], flexible [47, 36] or closed-loop [34] fiber suspensions in fluids. SBT has recently been placed on a more rigorous footing as the small-radius asymptotic limit of a surface boundary integral formulation [34, 35, 31]. In this work we focus on non-oscillatory kernels arising from Laplace (electrostatic) and Stokes applications, although we expect that by singularity splitting (e.g. [20]) the methods we present could be adapted for oscillatory or Yukawa kernels.

For numerical solution, the density function must first be discretized [45, 32]. A variety of methods are used in practice. If the geometry is smooth enough, and the interacting objects are far enough away that the density is also smooth, then a non-adaptive density representation is sufficient. Such representations include low-order splines on uniform grids [47], global spectral discretization by the periodic trapezoid rule for closed curves [32, 18, 6], and, for open curves, global Chebyshev [11, 36] or Legendre [46] expansions. However, if the density has regions where refinement is needed (such as corners or close interactions with other bodies), a composite (“panel”) scheme is better, as is common with Galerkin boundary-element methods [45]. On each panel a high-order representation (such as the Lagrange basis for a set of Gauss–Legendre nodes) is used; panels may then be split in either a graded [45, 25, 37] or adaptive [38, 42, 51] fashion.

Our goal is accurate and efficient evaluation of the potential due to a density given on a panel, at arbitrarily close target points. Not only is this crucial for accurate solution evaluation once the density is known, but it is also key to constructing matrix elements for a Nyström or Galerkin solution [45, 32] in the case when curves are close to each other [24, 37, 6]. Specifically, let the panel Γ be an open smooth curve in ℝ^d, where d = 2 or d = 3. We seek to evaluate the layer potential

(1)   u(x) = ∫_Γ K(x, y) σ(y) ds_y

at the target point x ∈ ℝ^d. Here the density σ is a smooth function defined on Γ, and K is a kernel that has a singularity as y → x. Letting φ denote some smooth (possibly tensor-valued) function of its arguments, in two dimensions the dominant singularity can be either logarithmic,

(2)   K(x, y) = φ(x, y) log |x − y| ,

or of power-law form

(3)   K(x, y) = φ(x, y) / |x − y|^m ,

where m is generally even. In 3D only the latter form arises, for m generally odd. We will assume that

Γ is described by the parametrization g that maps the standard interval [−1, 1] to Γ,

(4)   Γ = { g(t) : −1 ≤ t ≤ 1 } .

Thus the parametric form of the desired integral (1) is

(5)   u(x) = ∫_{−1}^{1} K(x, g(t)) f(t) |g′(t)| dt ,

where f(t) := σ(g(t)) is the pullback of the density to parameter space.

The difficulty of evaluation of a layer potential with given density is essentially controlled by the distance of the target point from the curve. In particular, if is far from , the kernel varies, as a function of , no more rapidly than the curve geometry itself. Thus a fixed quadrature rule, once it integrates accurately the product of density and any geometry functions (such as the “speed” ), is accurate for all such targets. Here, “far” can be quantified relative to the local quadrature node spacing and desired accuracy: e.g. in the 2D periodic trapezoid rule setting the distance should be around times the desired number of correct digits (a slight extension of [7, Remark 2.6]). However, as approaches , becomes increasingly non-smooth in the region where is near , and any fixed quadrature scheme loses accuracy. This is often referred to as the layer potential integral being nearly singular.
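To see this breakdown concretely, the following minimal Python sketch (illustrative only, not from the paper) applies a fixed 16-node Gauss–Legendre rule to a model nearly singular integral: the source is the flat segment [−1, 1] and the target sits a height d above its midpoint, so the m = 2 power-law integral has the closed form 2 arctan(1/d)/d.

    # Minimal illustration (not from the paper): a fixed n-node Gauss-Legendre
    # rule applied to a nearly singular integral loses accuracy as the target
    # approaches the source.  The "panel" is the flat segment [-1,1]; the
    # target lies a distance d above its midpoint, so the power-law kernel
    # integral has the closed form 2*arctan(1/d)/d.
    import numpy as np

    n = 16
    t, w = np.polynomial.legendre.leggauss(n)   # nodes and weights on [-1,1]

    for d in [1.0, 0.3, 0.1, 0.03, 0.01]:
        approx = np.sum(w / (t**2 + d**2))      # direct (native) quadrature
        exact = 2.0 * np.arctan(1.0 / d) / d
        print(f"d = {d:5.2f}   rel. error = {abs(approx - exact) / exact:.2e}")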

There are several approaches to handling the nearly singular case. The most crude is to increase the number of nodes used to discretize the density; this has the obvious disadvantage of growing the linear system size beyond that needed to represent the density, wasting computation time and memory. Slightly less naive is to preserve the original discretization nodes, but interpolate from these nodes onto a refined quadrature scheme (with higher order or subdivided panels [47]) used only for potential evaluations. However, to handle arbitrarily close targets one has to refine the panels in an adaptive fashion, afresh for each target point. As a target approaches, an increasing level of adaptive refinement is demanded. The large number of conditionals and interpolations needed in such a scheme can make it slow.

This has led to more sophisticated “special” rules which do not suffer as the target approaches the curve. They exploit properties of the PDE, or, in 2D, the relation of harmonic to analytic functions. In the 2D high-order setting, several important contributions have been made in the last decade, including panel-based kernel-split quadrature [24, 19, 37, 20, 22], globally compensated spectral quadrature [24, 6], quadrature by expansion (QBX) [7, 30, 49], harmonic density interpolation [40], and asymptotic expansion [12]. In 3D, little work has been done on high-order accurate evaluation of nearly singular line integrals. Many of the applications are in the context of active or passive filaments in viscous fluids, but the quadrature methods are limited to either using analytical formulae for straight segments [46], or regularizing the integrand using an artificial length scale [13, 26, 47].

In this paper, we propose a new panel-based quadrature method for nearly singular line integrals in both 2D and 3D. The method incurs no loss of efficiency for targets arbitrarily close to the curve. (Note that in the 2D case it is usual for one-sided limits of the potential to exist on the curve, but in 3D such limits do not generally exist.) Our method builds on the panel-based monomial recurrences introduced by Helsing–Ojala [24] and extended by Helsing and co-workers [19, 23, 21, 37], [22, Sec. 6], with the key difference that, instead of interpolating in the physical plane (which is associated with the complex plane), we interpolate in the (real) parametrization variable of the panel. Rather than exploiting Cauchy’s theorem in the spatial complex plane, the singularity is “swapped” by cancellation for one in the parameter plane; this requires a nearby root-finding search for the (analytic continuation of the) function describing the distance between the panel and the target. It builds on recent methods for computing quadrature error estimates in two dimensions [3]. We include a robust and efficient method for such root searches.

This new formulation brings two major advantages:

  1. The rather severe loss of accuracy of the Helsing–Ojala method for curved panels—which we quantify for the first time via classical complex approximation theory—is much ameliorated. As we demonstrate, more highly curved panels may be handled with the same number of nodes, leading to higher efficiency.

  2. The method extends to arbitrary smooth curves in 3D. Remarkably, the search for a root in the complex plane is unchanged from the 2D case.

As a side benefit, our method automatically generates the information required to determine whether it is needed, or whether plain quadrature using the existing density nodes is adequate, using results in [2, 3].

The structure of this paper is as follows. Section 2 deals with the 2D problem: we review the error in the plain (direct) quadrature (section 2.1), review the complex interpolatory method of Helsing–Ojala (section 2.2), then explain its loss of accuracy for curved panels (section 2.3). We then introduce and numerically demonstrate the new “singularity swap” method (section 2.4). Section 3 extends the method to deal with problems in 3D. Section 4 summarizes the entire proposed algorithm, unifying the 2D and 3D cases. Section 5 presents further numerical tests of accuracy in 2D, and accuracy and efficiency in a 3D Stokes slender body application, relative to standard adaptive quadrature. We conclude in section 6.

2 Two dimensions

Section 2.1 reviews the convergence rate for direct evaluation of the potential, i.e. using the native quadrature scheme; this also serves to introduce complex notation and motivate special quadratures. Section 2.2 summarizes the complex monomial interpolatory quadrature of Helsing–Ojala [24, 19] for evaluation of a given density close to its 2D source panel. In Section 2.3 we explain and quantify the loss of accuracy due to panel bending. Finally in Section 2.4 we use the “singularity swap” to make a real monomial version that is less sensitive to bending, and which will form the basis of the 3D method.

2.1 Summary of the direct (native) evaluation error near a panel

Here we review known results about the approximation of the integral (5) directly using a fixed high-order quadrature rule. In the analytic panel case this is best understood by analytic continuation in the parameter. Let t_j be the nodes and λ_j the weights for an n-node Gauss–Legendre scheme on [−1, 1], at which we assume the density is known. (We choose this rule since it is the most common; however, any other high-order rule with an asymptotic Chebyshev density on [−1, 1] would behave similarly.) Substituting the rule into (5) gives the direct approximation

(6)   u_n(x) = Σ_{j=1}^{n} K(x, x_j) f_j w_j ,

where the nodes are x_j := g(t_j), the density samples f_j := f(t_j), and the modified weights w_j := λ_j |g′(t_j)|. For any integrand analytic in a complex neighborhood of [−1, 1], the error

(7)   E_n := u(x) − u_n(x)

vanishes exponentially with n, with a rate which grows with the size of this neighborhood. Specifically, define E_ρ, the Bernstein ρ-ellipse, as the image of the disc |w| ≤ ρ under the Joukowski map

(8)   t = (w + 1/w) / 2 .

(For example, fig. 1(a) shows this ellipse for a particular ρ.) Then, if the integrand is analytic and bounded in E_ρ, there is a constant C depending only on ρ and this bound such that

(9)   |E_n| ≤ C ρ^(−2n) .

An explicit bound is given in [48, Thm. 19.3].

Returning to (5), the integrand of interest is , for which we seek the largest such that is analytic in . Bringing near induces a singularity near in the analytic continuation in of , a claim justified as follows. Each of the kernels under consideration has a singularity when the squared distance

(10)

goes to zero, which gives . Taking the negative case, the singularity (denoted by ) thus obeys

(11)

and one may check by Schwarz reflection that the positive case gives its conjugate . This suggests identifying with and introducing

(12)
(13)
(14)

Thus, in this complex notation, the equation (11) for is written

(15)

For now we assume that is analytic in a sufficiently large complex neighborhood of , so the nearest singularity is , the preimage of under ; see upper and lower panels of fig. 1(a). Thus our claim is proved.

Then inverting (8) gives the elliptical radius ρ₀ of the Bernstein ellipse on which any preimage t₀ lies,

(16)   ρ₀ = | t₀ ± √(t₀² − 1) | ,   with the sign chosen so that ρ₀ ≥ 1.

We then have, for any of the kernels under study, that (9) holds for any ρ < ρ₀. Thus the images g(E_ρ) of the Bernstein ellipses control contours of equal exponential convergence rate, and thus (assuming that the prefactor does not vary much with the target point), at fixed n, also the contours of equal error magnitude. A more detailed analysis for the Laplace double-layer case in [2], [3, Sec. 3.1] improves the rate, predicts the constant, and verifies that the error contours very closely follow this prediction. (In that work the tool was instead the asymptotics of the characteristic remainder function of Donaldson–Elliott [15], and the radius was left in the form (16).) Indeed, the lower panel of our fig. 1(a) illustrates, for an analytic choice of density, that the contour of constant error (modulo oscillatory “fingering” [3]) well matches the curve g(E_ρ₀) passing through the target. The top-left plots in Figures 2 and 3 show the shrinkage of these images of Bernstein ellipses as n grows.

For kernels other than the Laplace double-layer, error results are similar [30, 2]. This quantified breakdown in accuracy as the target approaches motivates the special quadratures to which we now turn.
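The following sketch (illustrative; it assumes the flat-panel case, where the Bernstein radius of the target preimage t₀ can be taken as ρ = |t₀ + √(t₀² − 1)| with the branch giving ρ ≥ 1) checks this exponential-rate heuristic for the model integrand 1/(t − t₀).

    # Sketch (assumptions: flat panel [-1,1]; the rho^(-2n) heuristic of this
    # section).  For the model integrand 1/(t - t0) with a complex preimage t0,
    # the n-node Gauss-Legendre error should decay roughly like rho^(-2n),
    # where rho is the Bernstein radius of t0 (branch chosen so rho >= 1).
    import numpy as np

    t0 = 0.4 + 0.25j                              # hypothetical target preimage
    exact = np.log(1 - t0) - np.log(-1 - t0)      # integral of 1/(t - t0) over [-1,1]

    r = t0 + np.sqrt(t0**2 - 1)
    rho = abs(r) if abs(r) >= 1 else 1 / abs(r)

    for n in [4, 8, 12, 16, 20]:
        t, w = np.polynomial.legendre.leggauss(n)
        err = abs(np.sum(w / (t - t0)) - exact)
        print(f"n = {n:2d}   error = {err:.2e}   rho^(-2n) = {rho**(-2*n):.2e}")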

2.2 Interpolatory quadrature using a complex monomial basis

We now review the special quadrature of Helsing and coworkers [24, 19, 37] that approximates (1) in the 2D case. In the complex notation of (14), given a smooth density defined on , for all standard PDEs of interest (Laplace, Helmholtz, Stokes, etc), the integrals that we need to evaluate can be written as the contour integrals

(17)
(18)

Appendix A reviews how to rewrite some boundary integral kernels as such contour integrals; note that may include other smooth factors than merely the density. Other 2nd-order elliptic 2D PDE kernels may be split into terms of the above form, as pioneered by Helsing and others in the Helmholtz [21], axisymmetric Helmholtz [23], elastostatics [19, Sec. 9], Stokes [38], and (modified) biharmonic [22, Sec. 6] cases.

An innovation of the method was to interpolate the density in the complex coordinate rather than the parameter . Thus,

(19)

This is most numerically stable if for now we assume that the panel has been transformed by rotation and scaling to connect the points −1 and 1. Here the (complex) images of the quadrature nodes are denoted z_j. The monomial coefficients can be found by collocation at these nodes, as follows. Let c be the column vector with entries c_k, y be the vector of values of the density at the nodes, and V be the Vandermonde matrix with entries V_{jk} = z_j^{k−1}, for j, k = 1, …, n. Then enforcing (19) at the nodes results in the linear system

(20)   V c = y ,

which can be solved for the coefficient vector c. Note that working in this monomial basis, even in the case of nodes z_j on the real axis, is successful up to quite high orders, although this is rarely stated in textbooks.

Remark 1 (Backward stability).

It is known, for any set of n real or complex nodes that is not close to the roots of unity, that the condition number of the Vandermonde matrix V in (20) must grow at least exponentially in n [16, 39]. However, as pointed out in [24, App. A], extreme ill-conditioning in itself introduces no loss of interpolation accuracy, at least for the orders n of interest here. Specifically, as long as a coefficient vector c has small relative residual ‖Vc − y‖/‖y‖, then the resulting monomial interpolant is close to the data at the nodes. Assuming weak growth in the Lebesgue constant with n, the interpolant is thus uniformly accurate. Such a small residual norm, if it exists for the given right-hand side, is found by a solver that is backward stable. We recommend either partially pivoted Gaussian elimination (e.g. as implemented by mldivide in MATLAB), which costs O(n³), or the Björck–Pereyra algorithm [10], which costs O(n²).
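A small sketch of the point of this remark (illustrative; numpy's solve, which uses partially pivoted LU, stands in for mldivide): despite an enormous condition number, the monomial interpolant recovered from the Vandermonde solve at 16 Gauss–Legendre nodes matches a smooth test function to many digits.

    # Sketch of Remark 1 (illustrative, not the paper's code): the monomial
    # Vandermonde matrix at 16 Gauss-Legendre nodes is extremely ill-conditioned,
    # yet a backward-stable solve still yields a uniformly accurate interpolant.
    import numpy as np

    n = 16
    t, _ = np.polynomial.legendre.leggauss(n)
    V = np.vander(t, n, increasing=True)        # V[j,k] = t_j^k
    f = lambda x: np.exp(x) * np.sin(2 * x)     # a smooth test "density"

    c = np.linalg.solve(V, f(t))                # monomial coefficients
    tt = np.linspace(-1, 1, 1001)
    interp = np.polyval(c[::-1], tt)            # evaluate sum_k c_k t^k

    print("cond(V)          =", f"{np.linalg.cond(V):.1e}")
    print("max interp error =", f"{np.max(np.abs(interp - f(tt))):.1e}")

The contrast between the huge condition number and the small interpolation error is exactly the point of the remark.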

As with any interpolatory quadrature, once the coefficients have been computed, the integrals (17) and (18) can be approximated as

(21)
(22)

where the values and are the exact integrals of the monomial densities

(23)
(24)

The benefit of this (somewhat unusual) complex monomial basis is that these integrals can be efficiently evaluated through recursion formulae, as follows.

2.2.1 Exact evaluation of the monomial integrals by recurrence

Recall that the panel has been transformed by rotation and scaling to connect the points −1 and 1. We first review the Cauchy kernel case [24, Sec. 5.1]. The contour integral exploits independence of the path:

(25)

Here the integer term is a winding number which, for the standard branch cut of the logarithm, is zero if the target is outside the domain enclosed by the oriented closed curve formed by the straight segment and the panel (one traversed forwards, the other backwards), and +1 (−1) when the target is inside a region enclosed counterclockwise (clockwise) [24, Sec. 7]. The following 2-term recurrence is derived by adding and subtracting a term from the numerator of the formula (24):

(26)

One can then recur upwards to get for all . For we get analogously,

(27)
(28)

Thus once all values for are known, they can be found for by upwards recurrence. Note that in [19, Sec. 2.2], a different recursion was used for the case . Finally, recursive computation of the complex logarithmic kernel follows directly, following [19, Sec. 2.3],

(29)

We emphasize that these simple exact path-independent formulae are enabled by the choice of the complex monomial basis in the coordinate .
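As an illustration of the kind of recurrence involved (a sketch whose indexing, normalization, and branch handling are chosen for simplicity rather than copied from [24]; the winding-number correction above is side-stepped by keeping the target off the real segment), the Cauchy-type moments ∫_{−1}^{1} z^k/(z − z₀) dz obey a two-term upward recurrence:

    # Sketch (not verbatim from [24]): two-term recurrence for the moments
    #   p_k = int_{-1}^{1} z^k / (z - z0) dz,   k = 0, 1, ..., n-1,
    # on the standard panel connecting -1 and 1.  Adding and subtracting
    # z0 * z^(k-1) in the numerator gives p_k = z0*p_{k-1} + (1 - (-1)^k)/k,
    # seeded by p_0 = log(1 - z0) - log(-1 - z0).  (This sketch keeps
    # Im z0 != 0 so the principal branch needs no winding-number correction.)
    import numpy as np

    def cauchy_moments(z0, n):
        p = np.zeros(n, dtype=complex)
        p[0] = np.log(1 - z0) - np.log(-1 - z0)
        for k in range(1, n):
            p[k] = z0 * p[k - 1] + (1 - (-1)**k) / k
        return p

    z0 = 0.3 + 0.4j                      # a complex "target" off the panel
    p = cauchy_moments(z0, 8)

    # brute-force check with a very fine Gauss-Legendre rule (z0 is far enough
    # from [-1,1] for this reference to converge)
    t, w = np.polynomial.legendre.leggauss(400)
    for k in range(8):
        ref = np.sum(w * t**k / (t - z0))
        print(f"k = {k}:  |recurrence - brute force| = {abs(p[k] - ref):.1e}")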

Remark 2 (Transforming for general panel endpoints).

We now review the changes that are introduced to the answers and when rescaling and rotating a general panel parameterized by , , into the required one which connects , and similarly transforming the target. Define the complex scale factor and origin . Then the affine map from a given location to the transformed location , given by

(30)

can be applied to all points on the given panel and to the target to give a transformed panel and target . From this, applying the formulae above one can evaluate and . By inserting the change of variables one then gets the desired potentials due to the density on at target ,

Here a new integral is needed (the total “charge”), easily approximated by , where are the Gauss–Legendre weights for .

2.2.2 Adjoint method for weights applying to arbitrary densities

The above procedure computes and of (17)–(18) for a specific function (such as the density) on , using its samples on the nodes to solve first for a coefficient vector . In practice it is more useful instead to compute the weight vectors and which, for any vector , approximate and when their inner products are taken against . That is, and , where indicates non-conjugate transpose. One application is for filling near-diagonal blocks of the Nyström matrix arising in a boundary integral formulation: and form row blocks of the matrix.

We review a result presented in [24, Eq. (51)] (where a faster numerical method was also given in the case). Let be either or , let be the corresponding integral (17) or (18), and let be the corresponding vector or as computed in Section 2.2.1. Writing our requirement for , we insert (20) to get

(31)

where the last form simply expresses (21) or (22). But this last equality must hold for all , thus

(32)

which is a linear system whose solution is the weight vector . Notice that this is an adjoint method [28].
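A sketch of this adjoint viewpoint (illustrative): one solve with the transposed Vandermonde matrix yields a weight vector whose inner product with any sample vector reproduces the value obtained by first solving for coefficients. The vector p below is a stand-in for the precomputed exact monomial integrals (the moments of the previous sketch would do).

    # Sketch of the adjoint trick (illustrative): instead of solving V c = y for
    # each density sample vector, solve the single transposed system
    # V^T lam = p once per target; then lam . f approximates the integral for
    # *any* density samples f.  Here p is just a stand-in vector; we verify the
    # algebraic identity lam . f == p . (V^{-1} f).
    import numpy as np

    n = 16
    t, _ = np.polynomial.legendre.leggauss(n)
    V = np.vander(t, n, increasing=True)        # V[j,k] = t_j^k

    rng = np.random.default_rng(0)
    p = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # stand-in moments
    f = np.exp(np.sin(t))                                      # density samples

    lam = np.linalg.solve(V.T, p)               # target-specific weight vector
    c = np.linalg.solve(V, f)                   # coefficient route, for comparison
    print("difference:", abs(lam @ f - p @ c))  # agrees up to (amplified) roundoff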

Figure 1: Native and complex-interpolatory evaluation errors for a simple parabolic panel , for (complex double-layer) with density as a function of parameter . (a) -plane preimage (top plot) mapping to panel (bottom plot) via with curvature , hence minimum radius of curvature . Also shown are: Gauss–Legendre points and (bottom with colormap) their error in potential evaluation using the native (direct) scheme; an example target and its preimage (large green dots); Bernstein ellipse for and its image (dotted red); Schwarz singularity and its preimage (red ’s); and Cartesian grid lines (grey) in the -plane and their images (white). (b) Equipotential curves of for the same panel , showing “shielding” of the Schwarz singularity (red ) thus small and . Contrast for the Schwarz preimage in the -plane. (c) Convergence in of polynomial approximation error of over (solid lines) for several curvatures , compared to (dotted) predicted by section 2.3.

2.3 Effect of panel curvature on Helsing–Ojala convergence rate

The Helsing–Ojala method reviewed above is easy to implement, computationally cheap, and can achieve close to full numerical precision for target points very close to . However, there is a severe loss of accuracy as becomes curved. This is striking in the central column of plots in fig. 2. This observation led Helsing–Ojala [24] and later users to recommend upsampling from a baseline to a larger , such as , even though the density and geometry were already accurately resolved at . The “HO” plot in fig. 3 shows that even using has loss of accuracy near a moderately curved panel. This forces one to split panels beyond what is needed for the density representation, just to achieve good evaluation accuracy, increasing the cost.

The error in this method is essentially entirely due to the error of the polynomial approximation (19) of on , because each monomial is integrated exactly (up to rounding error) as in Section 2.2.1. For this error there is a classical result that the best polynomial approximation converges geometrically in the degree, with rate controlled by the largest equipotential curve of in which is analytic. Namely, given the set , let solve the potential problem in (which we identify with ),

For each ρ, define the corresponding level curve of this potential. (The reuse of the radius symbol ρ from section 2.1 is deliberate, since in the special case of a flat panel this level curve is the boundary of the Bernstein ellipse E_ρ.) An example panel and its equipotential curves are shown in fig. 1(b); note that they are very different from the Bernstein ellipse images in the lower plot of fig. 1(a).

Theorem 3 (Walsh [50, §4.5]).

Let be a set whose complement (including the point at infinity) is simply connected. Let extend analytically to a function analytic inside and on , for some . Then there is a sequence of polynomials of degree such that

where is a constant independent of and .

To apply this to (19), we need to know the largest region in which may be continued as an analytic function, which is in general difficult. In practical settings the panels will have been refined enough so that on each panel preimage is interpolated in the parameter to high accuracy, thus we may assume that any singularities of are distant from . Yet, in moving from the to plane, the shape of the panel itself can introduce new singularities in which will turn out to explain well the observed loss of accuracy, as follows.

Proposition 4 (geometry-induced singularity in analytic continuation of density).

Let the pullback be a generic function analytic at . Let where . Let be such that and is analytic at . Then generically is not analytic at .

Proof.

Since the derivative vanishes, the Taylor expansion of the parameterization at takes the form . If were analytic at , expanding gives , then inserting gives which must match, term by term, the Taylor expansion of the pullback . But generically , in which case there is a contradiction. ∎

Similar (more elaborate) arguments are known for singularities in the extension of Helmholtz solutions [33]. Here is an example of a singularity of the Schwarz function [14, 44] for the curve . Recall that the Schwarz function is defined by , which is the analytic reflection of through the arc . At points where , the inverse map , hence the Schwarz function, has a square-root type branch singularity. Figure 1(a) shows that for a moderately bent parabolic panel, the Schwarz singularity is quite close to the concave side. Note that the above argument relies on having a generic expansion, which we believe is typical; we show below that its convergence rate prediction holds well in practice.

In summary, smooth but bent panels often have a nearby Schwarz singularity, which generically induces a singularity at this same location in the analytic continuation of the density ; then the equipotential for at this location controls the best rate of polynomial approximation, placing a fundamental limit on the Helsing–Ojala convergence rate in . Figure 1(b) shows that, for a moderately bent panel, the close singularity is effectively shielded electrostatically by the concave panel, thus the convergence rate is very small (close to 1). In fig. 1(c) we test this quantitatively: for a parabolic panel and simple analytic pullback density we compare the maximum error in -term polynomial approximation on with the prediction from the above Schwarz–Walsh analysis. The agreement in rate is excellent over a range of curvatures. This also matches the loss of digits seen in the central column of figures 2 and 3: e.g. for even mild curvature , achieves only 5 digits, and only 10 digits.

Conjecture 5 (Schwarz singularities at focal points).

Note that in the parabola example of fig. 1, a square-root type Schwarz singularity appears near ’s highest curvature point, at distance on the concave side, where is the radius of curvature. This is also approximately true for ellipses, whose foci induce singularities [44, §9.4]. We also always observe this numerically for other analytic curves. Hence we conjecture that it is a general result for analytic curves, in an approximate sense that becomes exact as .

Figure 2: Spatial plot of relative errors in 2D quadratures for the Laplace double-layer (42) on a parabolic panel , , with smooth density , using nodes. Each row shows a different curvature . Left column: direct quadrature of section 2.1. Middle column: Helsing–Ojala (HO) quadrature of section 2.2. Right column: proposed singularity swap quadrature (SSQ) of section 2.4. The red is the Schwarz singularity of .
Figure 3: Same as fig. 2, after interpolating panel data to nodes.

2.4 Interpolatory quadrature using a real monomial basis

We now propose a “singularity swap” method which pushes the interpolation problem to the panel preimage , thus bypassing the rather severe effects of curvature just explained. Recalling that the target point is , we first write the integrals (17) and (18) in parametric form

(33)
(34)

where we have introduced the displacement function and the smooth function ,

(35)
(36)

Now let be the root of nearest to . (Unless the panel is very curved, there is only one such nearby root and it is simple.) Then has a removable singularity at , and hence it and its reciprocal are analytic in a larger neighborhood of than the above integrands. By multiplying and dividing by ,

(37)
(38)

which as we detail below can be handled as special cases of (17) and (18) for a new written in the plane. We have “swapped” the singularity from the to the plane. As fig. 1 illustrates, the preimage of the Schwarz singularity has a much higher conformal distance from than the actual singularity in the -plane has from , indicating much more rapid convergence.

Thus we now apply the Helsing–Ojala methods of section 2.2, but to in the plane. Namely, as before, let and , be the Gauss–Legendre quadrature for . The first integral in (37) is smooth, so direct quadrature is used. The second integral in (37) is evaluated via (21) and (29), setting , with the coefficients for the function found by solving the Vandermonde system as in section 2.2. The first term in (38) is smooth and plays the role of in (18), so its coefficients are found similarly. (38) is then approximated by (22) with (25)–(28).
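The following sketch assembles the steps of this subsection for a Cauchy-type integral on a parabolic panel (all parameters, the density, and the simplified moment recurrence are illustrative choices, not the paper's implementation): find the parameter preimage t₀ of the target, interpolate the regularized integrand in monomials of t, and pair the coefficients with the moments.

    # Sketch of the 2D singularity swap for a Cauchy-type integral (illustrative
    # parameters).  Panel: the parabola tau(t) = t + i*a*t^2; density f(t) = e^t;
    # target zeta very close to the panel.  Steps: (i) find the parameter-plane
    # preimage t0 of zeta, (ii) interpolate the now-smooth function
    # f(t)*tau'(t)*(t - t0)/(tau(t) - zeta) in monomials of t, (iii) pair the
    # coefficients with the moments p_k = int_{-1}^{1} t^k/(t - t0) dt.
    import numpy as np

    a = 0.4                                   # panel curvature parameter
    tau  = lambda t: t + 1j * a * t**2
    dtau = lambda t: 1 + 2j * a * t
    f    = lambda t: np.exp(t)
    zeta = tau(0.2) + 0.03j                   # target ~0.03 above the panel

    n = 16
    t, w = np.polynomial.legendre.leggauss(n)

    # (i) preimage: root of i*a*t^2 + t - zeta = 0 closest to the real interval
    roots = np.roots([1j * a, 1.0, -zeta])
    t0 = roots[np.argmin(np.abs(roots.imag))]

    # (ii) monomial coefficients of the regularized (smooth) function
    h = f(t) * dtau(t) * (t - t0) / (tau(t) - zeta)
    c = np.linalg.solve(np.vander(t, n, increasing=True), h)

    # (iii) moments by the two-term recurrence (principal branch; Im t0 != 0 here)
    p = np.zeros(n, dtype=complex)
    p[0] = np.log(1 - t0) - np.log(-1 - t0)
    for k in range(1, n):
        p[k] = t0 * p[k - 1] + (1 - (-1)**k) / k
    I_ssq = c @ p

    # comparisons: plain 16-point rule, and a heavily subdivided reference
    I_direct = np.sum(w * f(t) * dtau(t) / (tau(t) - zeta))
    I_ref, edges = 0.0 + 0.0j, np.linspace(-1, 1, 65)
    for lo, hi in zip(edges[:-1], edges[1:]):
        ts = 0.5 * (hi - lo) * t + 0.5 * (hi + lo)
        I_ref += 0.5 * (hi - lo) * np.sum(w * f(ts) * dtau(ts) / (tau(ts) - zeta))

    print("direct 16-pt error :", abs(I_direct - I_ref))
    print("swapped 16-pt error:", abs(I_ssq - I_ref))

Only the moments depend on the target; the density enters only through the Vandermonde solve, which is what makes the precomputation of Remark 6 and the weight vectors of section 2.2.2 possible.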

Remark 6.

Since in this scheme the new flat “panel” is fixed, one may LU-decompose the Vandermonde matrix once and for all, then use forward and back substitution to solve for the coefficients given the samples of each of the two smooth functions appearing in (37) and (38), with only O(n²) effort.

Remark 7.

With a very curved panel it is possible that more than one root of is relevant; see fig. 4. In the case where poles and need to be cancelled, we have derived recursion formulae for and which generalize those in section 2.2.1. However, we have not found these to be needed in practice, so omit them.

2.4.1 Finding the roots of

The only missing ingredient in the scheme just described is to find the nearest complex root satisfying

(39)

i.e. the preimage of the (complex) target point under the complexification of the panel map; see fig. 4. We base our method on that of [3] (where roots were needed only for the purposes of error estimation). Let a degree n−1 polynomial approximant to the panel map on [−1, 1] be formed from its values at the n Legendre nodes; this is both well-conditioned and stable if we use a Legendre (or Chebyshev) expansion, and has an O(n²) cost. See [3] for details on how to use a Legendre expansion. Continuing each basis function to complex arguments gives a complex approximation

(40)

for which we seek the root nearest the interval [−1, 1]. Given a nearby starting guess, a single root can be found using Newton’s method, which converges rapidly and costs O(n) per iteration. A good starting guess is the image of the target under the affine map (30), which sends the panel endpoints to −1 and 1. With this initialization, Newton iterations converge to the nearest root in most practical cases. The iterations can, however, be sensitive to the initial guess when two roots are both relatively close to the interval, and sometimes converge to the second-nearest root. As illustrated in fig. 4, this typically happens only when the target point is on the concave side of a very curved panel.

Figure 4: Illustration of the relationship between the roots of the displacement , to the left, and target points in 2D physical space, to the right. The point corresponds to only one nearby root, , while the point corresponds to two nearby roots, and . Note that only lies inside the Bernstein ellipse of radius , marked green, such that it requires singularity swapping. The point is the Schwarz singularity of the curve, marking the point where stops being single-valued.
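A sketch of this Newton search (with a made-up panel map and target; numpy's Legendre routines stand in for the expansion machinery of [3]):

    # Sketch of the root search (illustrative, not the paper's code): fit a
    # Legendre expansion to the complexified panel map tau(t) at the 16 Legendre
    # nodes, then run Newton's method from the affine initial guess that maps
    # the panel endpoints to -1 and 1.
    import numpy as np
    from numpy.polynomial import legendre as L

    tau = lambda t: t + 0.3j * np.sin(1.5 * t)   # a smooth non-polynomial panel
    zeta = tau(0.1) + 0.02j                      # target close to the panel

    n = 16
    t, _ = L.leggauss(n)
    vals = tau(t)
    c = L.legfit(t, vals.real, n - 1) + 1j * L.legfit(t, vals.imag, n - 1)
    dc = L.legder(c)

    # affine initial guess: send the panel endpoints tau(-1), tau(1) to -1, 1
    t0 = (2 * zeta - (tau(1) + tau(-1))) / (tau(1) - tau(-1))
    for _ in range(20):
        step = (L.legval(t0, c) - zeta) / L.legval(t0, dc)
        t0 -= step
        if abs(step) < 1e-14:
            break

    print("root t0                 :", t0)
    print("residual |tau(t0)-zeta| :", abs(tau(t0) - zeta))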

A more robust, but also more expensive, way of finding the nearest root is to first find all roots of the polynomial at the same time, and then pick the nearest one. This can be done using a matrix-based method, which finds the roots as the eigenvalues of a generalized companion (or “comrade”) matrix [8]. This has a cost that is O(n³) if done naively, and O(n²) if done using methods that exploit the matrix structure [5].
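A corresponding sketch of the all-roots alternative (illustrative): numpy's legroots returns every root of a Legendre expansion as eigenvalues of a companion-type matrix, after which the root nearest the preimage interval [−1, 1] is selected. The trailing-coefficient trim is only there to keep this stand-in routine well behaved; it is not part of the method described above.

    # Sketch (illustrative): all roots at once via the companion-type matrix
    # behind numpy's legroots, then pick the one nearest the segment [-1,1].
    import numpy as np
    from numpy.polynomial import legendre as L

    tau = lambda t: t + 0.3j * np.sin(1.5 * t)   # same made-up panel as above
    zeta = tau(0.1) + 0.02j

    n = 16
    t, _ = L.leggauss(n)
    c = L.legfit(t, tau(t).real, n - 1) + 1j * L.legfit(t, tau(t).imag, n - 1)

    c_shift = c.copy()
    c_shift[0] -= zeta                           # roots of tau_poly(t) - zeta
    roots = L.legroots(L.legtrim(c_shift, tol=1e-10))  # trim negligible tail

    dist = np.abs(roots - np.clip(roots.real, -1, 1))  # distance to [-1,1]
    t0 = roots[np.argmin(dist)]
    print("nearest root:", t0, "  residual:", abs(tau(t0) - zeta))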

In order to determine whether Newton iterations or a matrix-based method should be used for finding the root , we suggest a criterion based on the distance to the nearest Schwarz singularity preimage of the panel, , as this is a measure of the size of the region where is single-valued. Let be a Bernstein radius beyond which direct Gauss–Legendre quadrature is expected to have relative error . E.g., ignoring the prefactor in (9) and solving for ,

(41)

is a useful estimate. Recalling (16), then , where we choose the constant , indicates that there may be more than one nearby root, and that it is safer to use a matrix-based root finding method. For each panel, can be found at the time of discretization, by applying Newton’s method to the polynomial , with the initial guess .

Remark 8.

By switching to matrix-based root finding when there is a nearby Schwarz singularity, we get a root-finding process that is robust also for very curved panels. However, in practical applications we have not observed it to be necessary, e.g. for the results of section 5. It is, on the other hand, necessary in the examples of figs. 2 and 3, because there we test the specialized quadrature at distances well beyond where it is required.

Remark 9.

The root-finding process can be made efficient by computing the interpolants for each segment at the time of discretization, so that they can be reused for every target point.

2.4.2 Tests of improved convergence rate for curved panels

We now compare the three methods: i) direct Gauss–Legendre quadrature, ii) Helsing–Ojala complex interpolatory quadrature, and iii) the proposed real singularity swap quadrature. We evaluate the Laplace double layer potential from a single panel,

(42)

The panel is the same parabola as in fig. 1, i.e. , or in complex form, , and we explore the dependence on curvature . In complex form, which is necessary to apply the quadratures, the layer potential is (see appendix A)

(43)

Figs. 2 and 3 compare the three schemes for various curvatures, for and respectively. For all data is upsampled by Lagrange interpolation from (which is already adequate to represent it to machine precision). The error is in all cases measured against a reference solution computed using adaptive quadrature (integral in MATLAB with abstol and reltol set to eps), and the relative error computed against the maximum value of on the grid. To see how they behave when pushed to their limits, we deliberately apply the quadratures much further away than necessary, i.e. also in the region where direct quadrature achieves full numerical accuracy.

The direct errors (left column of each figure) have magnitudes controlled by the images of Bernstein ellipses under the parameterization, as explained in section 2.1. The Helsing–Ojala scheme (middle column) is much improved over direct quadrature, but, because of its convergence rate's dependence on curvature (section 2.3), the reduction in accuracy is severe as curvature increases, although it can to some extent be ameliorated by using a larger n. In addition, there is an error in the quadrature that grows with the distance from the panel, particularly at the larger n, due to amplification of roundoff error in the upward recurrence for the integrals. This was also noted in [24], where the suggested fix was to instead run the recurrence backwards for distant points, starting from a value computed using the direct quadrature. This avoids the problem of catastrophic accuracy loss, but only because specialized quadrature is used where direct quadrature would be sufficient. The bottom middle plot of fig. 3 shows at best 10 accurate digits, and that only over quite a narrow range of distances, making the design of algorithms to give a uniform accuracy for all target locations rather brittle.

For the proposed singularity swap quadrature (right columns), there is still a loss of accuracy with increased curvature, but the improvement over Helsing–Ojala is striking. The instability of the upwards recurrence is also much milder.

Remark 10.

For the singularity swap quadrature, the loss of accuracy with increased curvature can be explained by considering the roots of the displacement function. We have “swapped” out the nearest root, so the region of analyticity of the regularized integrand is bounded by the second nearest root, illustrated in fig. 4. This limits the convergence rate of the polynomial coefficients, such that we may need to upsample the panel in order for the coefficients to fully decay. In the case of our parabolic panel, the second nearest root gets closer with increased curvature, which explains why upsampling to 32 points is necessary to achieve full accuracy in some locations. Note that for the upsampling to be beneficial, the higher coefficients must be nonzero. This is only the case if the components of the integrand are upsampled separately, before the integrand is evaluated at the new nodes.

3 Line integrals on curved panels in three dimensions

Since the logarithmic kernel is irrelevant in 3D, we care about the parametric form of (1),

(44)

where is an open curve in , and incorporates the density and possibly other smooth denominator factors in the kernel . Fixing the target , then introducing the 3D squared distance function (analogous to (10)) and the smooth function ,

(45)
(46)

we can write (44) as

(47)

This integrand has singularities at the roots of , which, since the function is real for real , come in complex conjugate pairs . How to find these roots is discussed shortly in section 3.2; for now we assume that they are known. As in 2D, if the line is not very curved, and the target is nearby, there is only one nearby pair of roots.

We construct a quadrature for (47) as in the 2D case (38), except that now there is a conjugate pair of near singularities to cancel. Thus we write (47) as

(48)

can be expected to be analytic in a much larger neighborhood of than the integrand. As in section 2.4, we now represent in a real monomial basis,

(49)

getting the coefficients by solving a Vandermonde system whose matrix entries are, as before, powers of the panel's parameter Legendre nodes in [−1, 1]; we fill the matrix by evaluating these entries directly.

(50)

which one can evaluate to machine accuracy using recurrence relations as described in the next section. Combining (50) and (49) gives the final quadrature approximation to (44),

(51)

The adjoint method of section 2.2.2 can again be used to solve for a weight vector whose inner product with approximates . As in section 2.4, the error in (51) is essentially due to the convergence rate of the best polynomial representation (49) on , which is likely to be rapid because of the absence of curvature effects (section 2.3).
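To make the 3D construction concrete, here is a sketch for the kernel 1/|g(t) − x| (m = 1) on a made-up space curve. The closed forms and the three-term recurrence used for the moments below were derived for this sketch only (by differentiating t^(k−1)·√((t−a)²+b²)); the paper instead uses the recurrences of section 3.1.

    # Sketch of the 3D singularity swap (illustrative geometry, density, and
    # moment formulas; not the paper's implementation).
    import numpy as np

    g1, g2, g3 = (lambda t: t), (lambda t: 0.5 * t**2), (lambda t: 0.2 * t**3)
    x = np.array([0.3, 0.5 * 0.3**2 + 0.02, 0.2 * 0.3**3])     # target near g(0.3)
    R = lambda t: (g1(t) - x[0])**2 + (g2(t) - x[1])**2 + (g3(t) - x[2])**2
    speed = lambda t: np.sqrt(1 + t**2 + (0.6 * t**2)**2)      # |g'(t)|
    f = lambda t: (1 + t**2) * speed(t)                        # pullback density * speed

    n = 16
    t, w = np.polynomial.legendre.leggauss(n)

    # conjugate root pair of the squared distance R(t) nearest [-1,1]
    ts = np.linspace(-1, 1, 13)
    roots = np.roots(np.polyfit(ts, R(ts), 6))   # R is exactly degree 6 here
    roots = roots[roots.imag > 0]
    t0 = roots[np.argmin(np.abs(roots - np.clip(roots.real, -1, 1)))]
    a, b = t0.real, t0.imag

    # swap: u = int f/sqrt(R) dt = int h(t)/sqrt((t-a)^2+b^2) dt with smooth h
    h = f(t) * np.sqrt((t - a)**2 + b**2) / np.sqrt(R(t))
    c = np.linalg.solve(np.vander(t, n, increasing=True), h)

    # moments I_k = int_{-1}^{1} t^k/sqrt((t-a)^2+b^2) dt: closed forms for
    # k = 0, 1, then a three-term upward recurrence
    s1, s2 = np.hypot(1 - a, b), np.hypot(1 + a, b)
    I = np.zeros(n)
    I[0] = np.arcsinh((1 - a) / b) + np.arcsinh((1 + a) / b)
    I[1] = a * I[0] + s1 - s2
    for k in range(2, n):
        bdry = s1 - (-1)**(k - 1) * s2           # [t^(k-1) sqrt(...)] at +-1
        I[k] = (bdry + (2*k - 1) * a * I[k-1] - (k - 1) * (a**2 + b**2) * I[k-2]) / k

    u_swap = c @ I
    u_direct = np.sum(w * f(t) / np.sqrt(R(t)))

    # heavily subdivided reference
    u_ref, edges = 0.0, np.linspace(-1, 1, 129)
    for lo, hi in zip(edges[:-1], edges[1:]):
        tt = 0.5 * (hi - lo) * t + 0.5 * (hi + lo)
        u_ref += 0.5 * (hi - lo) * np.sum(w * f(tt) / np.sqrt(R(tt)))

    print("direct 16-pt rel. error :", abs(u_direct - u_ref) / abs(u_ref))
    print("swapped 16-pt rel. error:", abs(u_swap - u_ref) / abs(u_ref))

The swapped 16-point value should agree with the heavily subdivided reference to many more digits than the direct 16-point rule, mirroring the 2D behaviour in figs. 2 and 3.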

3.1 Recurrence relations for 3D kernels on a straight line

The above singularity swap has transformed the integral on a curved line in 3D to (48), whose denominator corresponds to that of a straight line in 3D. Such upward recursions are available in [46, App. B], where they were used for Stokes quadratures near straight segments. We will here present them with some improvements which increase their stability for in certain regions of . We present the case of a single root pair , for orders , noting that, while we have derived formulae for double root pairs, we have found that they are never needed in practice. To simplify notation (matching notation for in [46, App. B]), we write

(52)

and for the distances to the parameter endpoints we write

(53)

Beginning with and , the integral (50) is

(54)

This expression suffers from cancellation for close to , and must be carefully evaluated. To begin with, it is more accurate in the left half plane, even though the integral is symmetric in , so the accuracy can be increased by evaluating it after the substitution . In addition, the argument of the second logarithm of (54) suffers from cancellation when and , even after the substitution. In this case we evaluate it using a Taylor series in ,

(55)

We achieve sufficient accuracy by applying this series evaluation to points inside the rhombus described by , evaluating the series to . For the upward recursions of [46] are stable, such that

(56)
(57)
(58)
(59)

For , the formula from [46] contains a conditional statement for points on the real axis, where the pair of conjugate roots merges into a double root. In finite precision, this causes a region around the real axis where the formula is inaccurate, namely two cones extending outwards from the endpoints. To get high accuracy there, we consider the integral with shifted limits,