In 1964 Donald Newman published a result that has attracted a good deal of attention among approximation theorists. Newman considered supremum norm approximation of |x| on [−1, 1] and showed that, whereas degree n polynomial approximants offer at best O(1/n) convergence as n → ∞, rational functions can achieve root-exponential convergence, that is, errors O(exp(−C√n)) with C > 0. (The degree of a rational function is the maximum of the degrees of its numerator and denominator.) Newman’s construction crucially involved placing poles in the complex plane clustered exponentially near the singularity at x = 0, and he also showed that root-exponential convergence was the best possible. In the subsequent half-century, his result has been sharpened by approximation theorists, notably Herbert Stahl, and extended to other functions such as x^α [13, 45, 46]. However, there appear to have been no attempts heretofore to exploit the root-exponential effect in scientific computing.
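The root-exponential effect is easy to observe numerically. The following sketch (our illustration, not part of the original paper) implements Newman's explicit construction in its standard presentation: with ζ = e^{−1/√n} and p(x) = ∏_{k=0}^{n−1} (x + ζ^k), the approximant is r(x) = x (p(x) − p(−x)) / (p(x) + p(−x)), whose error on [−1, 1] is bounded by 3 exp(−√n).

```python
import numpy as np

def newman_error(n):
    """Max error on [-1, 1] of Newman's rational approximant to |x|."""
    zeta = np.exp(-1 / np.sqrt(n))
    powers = zeta ** np.arange(n)
    p = lambda x: np.prod(x[:, None] + powers, axis=1)   # p(x) = prod (x + zeta^k)
    x = np.linspace(-1, 1, 20001)
    r = x * (p(x) - p(-x)) / (p(x) + p(-x))              # Newman's approximant
    return np.max(np.abs(np.abs(x) - r))

# Errors decrease root-exponentially, like exp(-sqrt(n)), in contrast to the
# O(1/n) best possible for degree n polynomial approximation of |x|.
e25, e100 = newman_error(25), newman_error(100)
```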
Here we introduce a method of this kind for solving Laplace problems on a 2D domain Ω bounded by a piecewise smooth Jordan curve Γ with corners, such as a polygon. In the simplest situation we have a Dirichlet problem with specified real boundary data h,

Δu = 0 in Ω,   u = h on Γ,   (1)

where we use the complex notation z = x + iy for simplicity. The method approximates u by the real part of a rational function r of z,

u(z) ≈ Re r(z),   (2)

with r taking the form

r(z) = Σ_{j=1}^{N1} a_j/(z − β_j) + Σ_{j=0}^{N2} b_j (z − z_*)^j.   (3)
Computing an optimal rational function is a difficult nonlinear problem in general [23, 36, 49], but our method is linear because it fixes the poles β_j of the first sum in (3) a priori in a configuration with exponential clustering near each corner. We speak of this as the “Newman” part of (3), concerned with resolving singularities. The second sum is simply a polynomial, with the expansion point z_* taken as a point roughly in the middle of Ω, and we call this the “Runge” term, targeting the smoother part of the problem. (The observation that functions analytic in a neighborhood of a simply connected compact set in the complex plane can be approximated by polynomials with exponential convergence dates back to Runge in 1885 [12, 41, 50].) With the poles β_j and the point z_* fixed, our method finds coefficients a_j and b_j by solving a linear least-squares problem on a discrete subset of points on the boundary Γ, which are also exponentially clustered in a manner reflecting that of the poles.
One might imagine that achieving root-exponential convergence would require a delicate choice of poles in (3). However, virtually any exponential clustering is in fact sufficient, provided its spacing scales with n as O(n^{−1/2}), and Section 2 is devoted to presenting theorems to establish this claim. Quite apart from their application to Laplace problems, we believe these results represent a significant addition to the approximation theory literature, as well as shedding light on clustered poles that have been observed experimentally elsewhere. Section 3 describes our algorithm, which depends on placing sample points on the boundary with exponential clustering to match that of the poles outside. Section 4 presents examples illustrating its remarkable speed and accuracy, and Section 5 comments on variants such as Neumann boundary conditions and multiply connected domains. We are in the process of developing codes to make the method available to others, and an analogous method for the Helmholtz equation will be presented in a separate publication.
A brief announcement of the new method, without theorems or proofs, has been published previously. Our method is a variant of the Method of Fundamental Solutions (MFS), and a discussion of its relationship with the MFS and other methods is given in the final section.
2 Root-exponential convergence theorems
This section will show that root-exponential convergence is achievable for all kinds of domains provided that poles are clustered exponentially near corner singularities with a spacing that scales as O(n^{−1/2}). The result is sharp in the sense that faster than root-exponential convergence is in general not possible for rational approximations of Laplace problems with corner singularities. To prove this, it is enough to consider the problem (1) with Ω as the upper half of the unit disk and boundary data h(z) = |Re z|, so that u(z) = |x| for z = x on the diameter [−1, 1]. If |u(z) − Re r(z)| ≤ ε on Ω̄ for some rational function r of degree n, then in particular | |x| − Re r(x) | ≤ ε for x ∈ [−1, 1]. But for real x, Re r(x), the average of r(x) and its complex conjugate, is itself a rational function of x, of degree at most 2n. Thus by Newman’s converse result, the approximation can improve no faster than root-exponentially with n.
Our proofs combine the Cauchy integral formula, to decompose global problems into local pieces, with the Hermite integral formula for rational interpolation. The rational version of the Hermite formula, which goes back at least to J. L. Walsh in the 1930s [56, sec. 8.1], is well known to approximation theorists, and it is possible that results like ours can be found in the literature. However, the emphasis in the literature is on relatively delicate analysis in relatively specific settings. (This comment applies to Theorem 1 of our own earlier paper, where we used an exponential change of variables coupled with the equispaced trapezoidal rule to prove root-exponential convergence for approximations on a half-disk.) Here we will make more general arguments. The root-exponential rate is a consequence of the balance of local accuracy at a corner, which requires poles to be placed closely nearby, and global accuracy further from the corner, which requires poles to be spaced close together. This is the same balance that leads to root-exponential convergence of certain quadrature formulas [47, 52].
We begin with a statement of the Hermite formula. A rational function is said to be of type (m, n) if it can be written in the form p/q, where p and q are polynomials of degrees at most m and n, respectively.
Lemma 2 (Hermite integral formula for rational interpolation). Let E be a simply connected domain in the complex plane bounded by a closed curve Γ, and let f be analytic in E and extend continuously to the boundary. Let interpolation points x_0, …, x_n in E and poles β_1, …, β_n anywhere in the complex plane be given. Let r be the unique type (n, n) rational function with simple poles at the β_j that interpolates f at the x_j. Then for any z ∈ E,

f(z) − r(z) = (1/2πi) ∫_Γ (φ(z)/φ(t)) f(t)/(t − z) dt,   (4)

where

φ(z) = ∏_{j=0}^{n} (z − x_j) / ∏_{j=1}^{n} (z − β_j).   (5)
The theorem is presented as Theorem 2 of Chapter 8 of Walsh’s monograph [56], with a proof based on residue calculus. Walsh quoted the same formula as equation (12.1) of an earlier paper of his. We do not know whether he was the first to develop this result; more recent discussions can also be found in the literature.
The application of (4) for accuracy estimates goes back to the analysis of polynomial interpolants by Méray and Runge at the end of the 19th century [49, chap. 11], and in the case of rational functions, to Walsh [55, 56]. With good choices of the interpolation points and poles, |φ(t)| may be much larger for t on the contour of integration than |φ(z)| is for z in the region where f is being approximated. In such a case, the ratio φ(z)/φ(t) in (4) will be very small, and this enables one to use the integral to bound f − r. For the estimate of our first theorem, we will apply just this reasoning.
Note that unlike the proofs of our theorems, our algorithms are not based on interpolation. The purpose of the theorems is to establish the existence of root-exponentially good approximants, not to construct them. That will be done by the much more flexible method of least-squares fitting on the boundary.
We begin with a rational approximation problem on the slice-of-pie region sketched on the left in Figure 1, starting from the definition

V = {z : 0 < |z| ≤ 1, −θ ≤ arg z ≤ θ},   (6)

where θ ∈ (0, π/2] is the half-angle of the wedge, and assume a function f is given that is bounded and analytic in the slit disk A = {z : |z| < 1} \ (−1, 0]. For example, f might have a z^δ or z^δ log z singularity at z = 0 (δ > 0). As shown by Wasow and others, these are the standard singularities that arise in Laplace problems bounded by analytic curves meeting at corners. The following theorem, which is close to Theorem 2.1a of an earlier paper, approximates f on a wedge with half-angle θ ≤ π/2. However, the result actually holds for any θ < π, as we discuss at the end of the section. The notation ‖·‖_V denotes the supremum norm over V.
Theorem 2. Let f be a bounded analytic function in the slit disk A that satisfies f(z) = O(|z|^δ) as z → 0 for some δ > 0, and let θ ∈ (0, π/2] be fixed. Then for some c > 0 depending on f and θ but not n, there exist type (n, n) rational functions r_n, n = 1, 2, 3, …, such that

‖f − r_n‖_V = O(e^{−c√n})   (7)

as n → ∞, where V is the wedge (6). Moreover, each r_n can be taken to have simple poles only at

β_j = −e^{−σ j/√n},   1 ≤ j ≤ n,   (8)

where σ > 0 is arbitrary.
Proof. Fix n, define the poles β_j by (8), and place interpolation points along the interior bisector of the wedge, exponentially clustered with the same spacing as the poles.
This implies and for for any , and therefore by (5)
Here and until the final paragraph of the proof, the constant σ has not yet been chosen, and our statements apply regardless of its value.
Let r be the type (n, n) rational interpolant to f as in Lemma 2. We need to show that there are constants c, C > 0, independent of n, such that

|f(z) − r(z)| ≤ C e^{−c√n}   (11)

for z ∈ V. To do this, we apply the integral (4), with the contour of integration taken as the boundary of the slit disk A. (The two sides of the slit are distinct components of this contour. We may assume without loss of generality that f extends continuously to the boundary, for if not, we may integrate over contours in A that come arbitrarily close to it.)
Equation (4) becomes

f(z) − r(z) = I_1 + I_2,   (12)

where I_1 is the contribution to the integral from the two sides of the slit and I_2 is the contribution from the circular part of the boundary, and we need to show that I_1 and I_2 are both bounded by C e^{−c√n}.
To bound , we note that for , we have , and thus by (10),
By the Hölder inequality, this means that to bound , it is enough to bound the integral of over . Now in analogy to (10) we have
for , since and for in (5). So it is enough to bound the integral of , and by the assumption on the behavior of as , this amounts to an integral of the type
Since δ > 0, this integral is of order 1, independently of n, as required.
To bound in (12), we note that although grows large pointwise on for and , its integral over is bounded, independently of . Thus by the Hölder inequality again, it is enough to show that is root-exponentially small uniformly for and . To do this, we first observe that is root-exponentially small for . For , this conclusion follows from (10). For , we note from (5) that is a product of factors of size at most . To show that the product is root-exponentially small, it suffices to show that on the order of of these factors are bounded in absolute value by a fixed constant independent of . For each , a suitable set of factors with this property are those with . From (8) it follows that the number of these factors grows in proportion to as , and from elementary geometry it follows that each factor is bounded by a constant (dependent on but not ) as required.
We then have to balance the size of against that of . For in the left half-plane, this is immediate since (13) ensures that is not small (more precisely, (13) weakens slightly to for all in the left half-plane). For in the right half-plane, we have to be more careful since is root-exponentially small like , and this is where the constant comes into play. Given , let us discard all poles with and interpolation points with (both for ) from the rational interpolation problem. The remaining quotients defining are each of absolute value no smaller than , and their product is no smaller than , the exponent here being rather than because the ratios approach geometrically as a function of , so that, loosely speaking, only the first of them affect the size of their product. This implies that the root-exponential constant of in (11) satisfies . So for sufficiently small , the rate of root-exponential decrease of as exceeds that of , making root-exponentially small.
Theorem 2 is the local assertion, establishing root-exponential resolution of corner singularities. The next step is to show that global approximations can be constructed by adding together these local pieces, with a polynomial term to handle smooth components away from the corners. We make the argument for the case in which Ω is a convex polygon (with internal angles less than π), as sketched in Figure 3, although in fact we believe convexity is not necessary.
Theorem 3. Let Ω be a convex polygon with corners w_1, …, w_m, and let f be an analytic function in Ω that is analytic on the interior of each side segment and can be analytically continued to a disk near each w_k with a slit along the exterior bisector there. Assume f satisfies f(z) − f(w_k) = O(|z − w_k|^δ) as z → w_k for each k, for some δ > 0. Then there exist degree n rational functions r_n, n = 1, 2, 3, …, such that

‖f − r_n‖_Ω = O(e^{−c√n})   (14)

as n → ∞ for some c > 0. Moreover, each r_n can be taken to have finite poles only at points exponentially clustered along the exterior bisectors at the corners, with arbitrary clustering parameter σ as in (8), so long as the number of poles near each w_k grows at least in proportion to n as n → ∞.
Proof. We represent f as a sum of 2m terms,

f = f_1 + ⋯ + f_{2m},   (15)

each defined by a Cauchy integral

f_k(z) = (1/2πi) ∫_{Γ_k} f(t)/(t − z) dt   (16)

over a different contour Γ_k within the closure of the region of analyticity of f. For 1 ≤ k ≤ m, Γ_k consists of the two sides of an exterior bisector slit at w_k, and for m + 1 ≤ k ≤ 2m, Γ_k is a curve that connects the end of the slit contour at vertex w_{k−m} to the beginning of the slit contour at vertex w_{k−m+1}, indices understood cyclically. See Figure 3. The decomposition (15) holds since the sum of the contours winds all the way around Ω̄ while remaining in the region of analyticity of f. For the mathematics of Cauchy integrals over open arcs, see Chapter 14 of Henrici’s monograph.
Consider first a function f_k defined by the Cauchy integral (16) over an arc Γ_k connecting two adjacent slits. This function is analytic in any open region disjoint from Γ_k, and in particular, in an open region containing the closure of Ω. By Runge’s theorem [12, 41], it follows that f_k can be approximated by polynomials on Ω̄ with exponential convergence. This is the smooth part of the problem.
For the “Newman” part, consider a function f_k defined by the integral over a contour Γ_k consisting of the two sides of a slit at w_k associated with a sector of half-angle θ_k. Again, f_k is analytic in any open region disjoint from Γ_k. In particular, it is analytic in a slit-disk region like A of Figures 1 and 2, but tilted and translated to lie around the slit and of arbitrary scale. Taking this scale greater than the diameter of Ω is enough for the argument of Theorem 2 to ensure the existence of rational approximations to f_k converging root-exponentially on Ω̄. We note that this argument requires f_k(z) → 0 as z → w_k. This we can ensure by subtracting a constant from f_k, to be absorbed in the polynomial term of the approximation. The necessary continuity condition on f_k then follows from the continuity assumption on f, since f and f_k differ by terms that are analytic at w_k.
Adding the polynomial and rational pieces together, we get rational approximations to f with root-exponential convergence in Ω̄.
With Theorem 3 we have established the existence of root-exponentially convergent rational approximations to an analytic function on a convex polygonal region Ω, provided the singularities of the function on the boundary are just branch points at the corners. We now discuss three extensions.
First of all, nothing in our arguments has utilized the straightness of the sides of Ω. The same results hold if Ω is bounded by analytic arcs meeting at corners, which is the setting of the paper by Wasow mentioned earlier as well as of related papers by Lewy and Lehman, who generalized from the Laplace equation to other PDEs. The case of analytic arcs meeting with angle zero, a cusp, is also not a problem, and indeed in our context is indistinguishable from the case of a finite angle, in view of the assumption of analyticity in a slit disk around each corner.
Secondly, our application of these results to the solution of the Laplace equation will concern harmonic functions rather than analytic ones. There is little difference between the two settings, however, since any harmonic function u on Ω is the real part of an analytic function f on Ω that is uniquely determined up to an additive imaginary constant (since Ω is simply connected). The only technical issue to be considered is that of regularity. If u is harmonic in Ω and can be harmonically extended across the boundaries and around slits at each corner, with u(z) − u(w_k) = O(|z − w_k|^δ) for some δ > 0 for each k, does this imply the same properties for the harmonic conjugate, and thus for the analytic function f to which we would like to apply Theorem 2? The answer is yes, as can be shown by a conformal transplantation from Ω to a disk or a half-plane followed by application of standard theory of the Hilbert transform, which maps a function to its harmonic conjugate. See Theorem 14.2c of Henrici’s monograph.
The third issue to be considered is nonconvex domains. In fact, we believe Theorem 2 is valid without the assumption θ ≤ π/2 and Theorem 3 is valid without the assumption of convexity. We will not attempt a proof, but will outline the very interesting mathematics that could be applied to this end.
The reason why Theorem 2 is restricted to convex corners is that its proof makes use of interpolation points placed along the interior bisector, a convenient configuration for a simple argument. To handle more general cases, better choices of interpolation points are needed. One interesting choice is to place points with exponential clustering not on the bisector, but on the two sides of the sector. By this means one can extend Theorem 2 to all θ < π, as illustrated in Figure 4. That is, one can guarantee the existence of rational approximations that converge root-exponentially near the singular corner of a nonconvex sector of a disk.
For a generalization of Theorem 3 to arbitrary nonconvex polygons or their curved analogues, however, more is needed. An arbitrary region of this type might have a complicated shape like an S or a spiral, and to handle such cases, we need to construct rational approximations of each corner function f_k that are accurate globally in Ω̄, not just in a sector. To achieve this, the idea we need is to let the interpolation points distribute themselves in a configuration that is approximately optimal in the sense of minimizing a certain electrostatic energy associated with the poles and interpolation points. Such ideas came to the fore in approximation theory with the work of Gonchar and his collaborators beginning shortly after Newman’s paper appeared [13, 14, 32, 42, 45]. Given any choice of interpolation points {x_j} and poles {β_j}, define the associated potential function by

u(z) = Σ_{j=0}^{n} log |z − x_j| − Σ_{j=1}^{n} log |z − β_j|,   (17)

which is related to the function φ of (4) by

u(z) = log |φ(z)|.   (18)
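The relation between the potential and φ is immediate from the definitions and can be checked numerically in a couple of lines (the interpolation points and poles below are arbitrary illustrative choices, not a configuration from the paper):

```python
import numpy as np

# Potential of interpolation points {x_j} minus potential of poles {b_j},
# compared with log|phi(z)| for phi(z) = prod(z - x_j) / prod(z - b_j).
xj = np.exp(-np.arange(6) / 2.0)             # clustered interpolation points
bj = -np.exp(-np.arange(1, 6) / 2.0)         # clustered poles
z = 0.3 + 0.4j
phi = np.prod(z - xj) / np.prod(z - bj)
U = np.sum(np.log(np.abs(z - xj))) - np.sum(np.log(np.abs(z - bj)))
# U agrees with log|phi(z)| to rounding error
```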
In rational approximation theory, it is usual to let both the interpolation points and the poles vary to find near-optimal approximants by minimizing an associated energy. For our purposes, however, the poles are fixed a priori in a configuration with exponential clustering near the corners. The terms involving the poles alone accordingly contribute just a constant, and the issue reduces to the adjustment of the interpolation points {x_j} to approximately minimize the remaining energy.
By means of such a minimization, rational approximations can be constructed that are good throughout a nonconvex region with corners. The convergence rate will be root-exponential, though with a suboptimal constant since the poles have not been chosen optimally.
At the end of seven pages of perhaps dense mathematics we remind the reader that none of these details of Hermite integrals, Cauchy integrals, and potential theory are part of our algorithm, which works simply by solving a linear least-squares problem. Their only purpose is to justify the algorithm theoretically.
3 The algorithm
Here is our algorithm for solving (1). At present our codes are exploratory, but we aim to develop more robustly engineered software in the near future.
1. Define boundary Γ, corners w_1, …, w_m, boundary function h, and tolerance ε.
2. For increasing values of n, with √N approximately evenly spaced:
2a. Fix N1 poles β_j clustered outside the corners;
2b. Fix monomials (z − z_*)^k and set N = 2N1 + 2N2 + 1;
2c. Choose sample points on boundary, also clustered near corners;
2d. Evaluate the basis functions at the sample points to obtain the matrix A and RHS vector b;
2e. Solve the least-squares problem Ac ≈ b for the coefficient vector c;
2f. Exit the loop if the error is below ε, or if N is too large or the error is growing.
3. Confirm accuracy by checking the error on a finer boundary mesh.
4. Construct a function to evaluate u(z) based on the computed coefficients c.
If N is the largest number of degrees of freedom used in the computation, then the largest matrix A will be of row and column dimensions O(N), corresponding to an operation count in step (2e) of O(N^3). If we assume that N scales as O(m (log(1/ε))^2) for a domain with m corners, this gives an overall operation count for the algorithm of

O(m^3 (log(1/ε))^6).   (20)
These exponents look daunting, and we are investigating methods of speeding up the linear algebra. (Both domain decomposition by corners and multiscale decomposition by space scales offer possibilities.) However, the constants in the “O” are small, and small- or medium-scale problems often fall short of the asymptotic regime of cubic linear algebra in any case. As we shall show in the next section, our MATLAB implementation typically solves problems of the kinds considered here in a fraction of a second on a desktop machine, with the accuracy guaranteed all the way up to the corners.
We now give details of each of the steps.
1. Define boundary Γ, corners w_1, …, w_m, boundary function h, and tolerance ε. If Γ is a polygon, these setup operations are straightforward, and for more complicated domains, any convenient representation of the boundary can be used. Concerning ε, our usual habit is to work with a fixed default value, which leads to solutions very often in less than a tenth of a second. The estimate (20) would suggest that doubling the number of requested digits should make a computation about 2^6 = 64 times slower, but in practice, for problems of modest scale, the slowdown is usually much smaller.
2a. Fix N1 poles β_j clustered outside the corners. In all the experiments of this paper, following a rule of thumb derived from numerical experience, poles are placed near each salient corner and three times as many at each reentrant corner (i.e., one with internal angle greater than π). In the version of the paper originally submitted for publication, the poles were clustered according to (8), with distances from each corner scaling as e^{−σ j/√n}. Subsequent numerical experience showed that the numbers of degrees of freedom could be reduced by modifying the prescription to distances scaling as

e^{−σ(√n − √j)},   1 ≤ j ≤ n,

with σ = 4, and this is the formula used for the numerical results presented here. Since the spacing still scales with n^{−1/2}, our theoretical guarantee of root-exponential convergence still applies.
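In code, the tapered prescription takes only a few lines. Here σ = 4 as in the text, while the corner location, overall scale, and bisector direction are illustrative assumptions:

```python
import numpy as np

# Distances d_j = e^{-sigma (sqrt(n) - sqrt(j))} from the corner, j = 1, ..., n.
sigma, n = 4.0, 24
j = np.arange(1, n + 1)
d = np.exp(-sigma * (np.sqrt(n) - np.sqrt(j)))
corner = 0.0
direction = np.exp(0.75j * np.pi)            # unit vector along exterior bisector
poles = corner + d * direction

# The innermost pole lies root-exponentially close to the corner (about 2e-7
# here), the outermost at distance 1, and the log-spacing scales as 1/sqrt(n).
```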
2b. Fix monomials (z − z_*)^k and set N = 2N1 + 2N2 + 1. For simplicity, we usually take the polynomial degree N2 comparable to the number of poles per corner. From the point of view of asymptotic theory one could get away with a lower degree, but little speed would be gained from such fine tuning, and it would risk delaying the asymptotic convergence in cases dominated by the smooth part of the problem rather than the corner singularities. The explanation of the formula for N is that this is the number of real degrees of freedom in (3), since the imaginary part of the constant term can be taken to be zero.
2c. Choose sample points on the boundary, also clustered near corners. An essential feature of our algorithm is that the sample points on the boundary are clustered exponentially near the corners, like the poles β_j. For a Laplace solution to accuracy ε, for example, there will be poles at distances on the order of ε from the corners, and tracking their effect will require resolution of the boundary on the same scale. Uniform sampling of the boundary is out of the question, since it would lead to matrices with on the order of 1/ε rows. Indeed, it would not even be a good idea mathematically, since our algorithm relies on the discrete least-squares norm over the boundary approximating the continuous supremum norm, and this property would fail if nearly all of the sample points lay far from the corner singularities.
Fortunately, experiments show that successful results do not depend on fine details of the boundary sampling scheme; and in any case, step (3) of the algorithm gives the chance to confirm the accuracy of a computed approximation. The following is the scheme we have used for our computations. Each pole β_j is associated with two free parameters of the approximation, the coefficients multiplying the real and imaginary parts of 1/(z − β_j). For each β_j we introduce six sample points on the boundary Γ. If w is the corner near β_j, define d_j = |w − β_j|. We take the six sample points to lie at three distances on the order of d_j from w along each of the two boundary arcs meeting at w. If any of these points lie beyond the end of the boundary arc, they are of course not included. (This kind of boundary sampling might have simplified some earlier computations.)
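A sampling rule of this kind can be sketched as follows; the particular multiples (1/2, 1, 2) of d_j are an illustrative stand-in for the exact distances used in our computations, and the two boundary arcs are taken locally straight:

```python
def samples_for_pole(w, beta, dir1, dir2):
    """Six boundary sample points for the pole beta near corner w.
    dir1, dir2: unit tangents of the two boundary arcs leaving w."""
    d = abs(w - beta)                        # distance of pole from corner
    return [w + m * d * t for t in (dir1, dir2) for m in (0.5, 1.0, 2.0)]

# Corner of a square at the origin, pole on the exterior bisector:
pts = samples_for_pole(0.0, -1e-3 - 1e-3j, 1.0, 1j)
```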
2d. Evaluate the basis functions at the sample points to obtain the matrix A and right-hand side vector b. Both A and b are real, with b corresponding to samples of the real function h and the columns of A corresponding to samples of the real and imaginary parts of the pole and monomial terms. In a typical small- or medium-scale problem, A might have 200–2000 columns and about three times as many rows.
2e. Solve the least-squares problem Ac ≈ b for the coefficient vector c. To solve the system, first we rescale the columns of A to give them equal 2-norms. (Without rescaling, columns corresponding to poles β_j very close to a corner would have some very large entries.) We then apply standard methods of numerical linear algebra to solve the least-squares problem: in MATLAB, a call to the backslash operator, which applies QR factorization with column pivoting. The matrices are often highly ill-conditioned, a phenomenon investigated by various authors and related to the general notions of overcomplete bases and frames [5, 21, 24, 26]. We make some remarks about the ill-conditioning of A at the end of Section 4 and intend to investigate this matter further in the future.
2f. Exit the loop if the error is below ε, or if N is too large or the error is growing. For problems that are not too difficult, convergence to the tolerance usually happens quickly. If ε is very small, however, failures become more likely, and our code includes termination conditions for such cases. In particular, our current code becomes unreliable when ε approaches machine precision, though we hope to improve this with further investigation.
3. Confirm accuracy by checking the error on a finer boundary mesh. One of the appealing features of this numerical method is that the solution it delivers is exactly a harmonic function (apart from rounding errors), which implies by the maximum principle that the error in the interior is bounded by its maximum on the boundary. Thus one can get a virtually guaranteed a posteriori error bound by sampling the boundary error finely — say, twice or four times as finely as the least-squares grid. Using methods of validated numerics, it should also be possible to generate a truly guaranteed bound, though we have not pursued this idea.
4. Construct a function to evaluate u(z) based on the computed coefficients c. Once the coefficient vector c for the numerical solution of the Laplace problem has been found, many computer languages make it possible to construct an object r that evaluates the solution at a single point or at a vector or matrix of points with a syntax as simple as r(z). In MATLAB this can be done by creating r as an anonymous function.
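To make the steps concrete, here is a self-contained NumPy sketch of the whole pipeline on a toy problem: a Dirichlet problem on the unit square whose exact solution is known, since the boundary data are the real part of a function analytic in a neighborhood of the square. All specific choices (the test problem, 20 poles per corner, σ = 4, degree-30 Runge term, tanh-graded sampling) are our illustrative assumptions; the paper's implementation is in MATLAB.

```python
import numpy as np

# Toy "lightning" solve of a Dirichlet problem on the unit square.  The
# boundary data h = Re(1/(z - z0)), z0 outside the square, make the exact
# harmonic solution known everywhere, so the accuracy can be verified.
w = np.array([0, 1, 1 + 1j, 1j])                      # corners, counterclockwise
z0 = 2 + 0.5j
h = lambda z: (1 / (z - z0)).real

# "Newman" part: poles on the exterior bisector of each corner, with
# tapered clustering e^{-sigma(sqrt(n)-sqrt(j))}.
sigma, n = 4.0, 20
d = np.exp(-sigma * (np.sqrt(n) - np.sqrt(np.arange(1, n + 1))))
poles = []
for k in range(4):
    v1, v2 = w[k] - w[k - 1], w[k] - w[(k + 1) % 4]   # outward side vectors
    bis = v1 / abs(v1) + v2 / abs(v2)
    poles.append(w[k] + 0.5 * d * bis / abs(bis))     # clustered outside corner
poles = np.concatenate(poles)

# Sample points on each side, graded exponentially toward both corners.
t = (np.tanh(np.linspace(-8, 8, 150)) + 1) / 2
Z = np.concatenate([w[k] + t * (w[(k + 1) % 4] - w[k]) for k in range(4)])

# Columns of A: real and imaginary parts of 1/(z - beta_j) ("Newman") and
# of monomials (z - z*)^k ("Runge"), as in the representation (3).
zs, deg = 0.5 + 0.5j, 30
def basis(z):
    z = np.atleast_1d(z)
    cols = np.hstack([1 / (z[:, None] - poles),
                      (z[:, None] - zs) ** np.arange(deg + 1)])
    return np.hstack([cols.real, cols.imag])

A = basis(Z)
scale = np.linalg.norm(A, axis=0)
scale[scale == 0] = 1                                 # one identically zero column
c = np.linalg.lstsq(A / scale, h(Z), rcond=None)[0] / scale
u = lambda z: basis(z) @ c                            # computed solution Re r(z)

# Step 3: a posteriori check on a finer boundary mesh; by the maximum
# principle this bounds the error everywhere inside the domain.
tf = (np.tanh(np.linspace(-8, 8, 400)) + 1) / 2
Zf = np.concatenate([w[k] + tf * (w[(k + 1) % 4] - w[k]) for k in range(4)])
bdry_err = np.max(np.abs(u(Zf) - h(Zf)))
int_err = abs(u(0.3 + 0.7j)[0] - h(0.3 + 0.7j))
```

Since this test problem has no corner singularities, the Runge part does most of the work; the clustered pole columns and graded sampling matter when the boundary data are merely continuous at the corners.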
4 Numerical examples
We now present two pages of examples to illustrate the Laplace solver in its basic mode of operation: a Dirichlet problem on a polygon with continuous boundary data. The next section comments on variants such as Neumann or discontinuous boundary data. Over the course of the work leading to this article, we have solved thousands of Laplace problems.
Each example consists of a pair of figures: a convergence curve on the left and a contour plot of the solution on the right. Consider the first example of Figure 5, a regular pentagon. The convergence curve shows the supremum-norm error measured over the least-squares grid on the boundary, on a log scale, against √N, where N is the total number of degrees of freedom (the column dimension of A). The straight shape of the curve confirms root-exponential convergence down to the specified tolerance of 10^{−8}. For this example the tolerance is reached in a fraction of a second of desktop time. The code also does a thousand point evaluations of the resulting solution to estimate the time required per point, which comes out at about a microsecond. Thus one could evaluate the computed solution at ten thousand points, all with 8-digit accuracy, in a hundredth of a second.
The red circle at the end of the convergence curve corresponds to a measurement of the boundary error on a finer grid than is used for the least-squares solve (twice as fine), following step 3 of the algorithm as described in the last section. The fact that it matches the final black dot confirms that the least-squares grid has been fine enough to serve effectively as a continuum.
The image on the right shows the computed solution. In this set of examples, the boundary function is taken to be a different analytic function on each side, taking the value zero at the corners to ensure continuity. (The boundary functions are smooth random functions of the kind produced by the Chebfun randnfun command, but this is just for convenience, not important to our algorithm.) This figure also shows the poles, exponentially clustered outside each corner, as red dots. The distances decrease to the same order of magnitude as the accuracy, so most of the poles are indistinguishable in the figure. There is also a black dot at the origin indicating the expansion point z_* of the polynomial part of (3). The heading reports the dimensions of the final matrix A used, whose least-squares solution dominates the computing cost.
The remaining examples of Figure 5 show an L-shape, a snowflake, and an isospectral drum. Root-exponential convergence is evident in each case. Each problem is solved in less than a second, and the solutions obtained can be evaluated in less than 5 microseconds per point.
The second page of examples, Figure 6, first shows random polygons with 6, 10, and 15 vertices. The vertices of each polygon lie at uniformly spaced angles about a central point, with the radii taken as independent samples from a uniform distribution. The figures show successful computations, but the times grow cubically with the number of vertices. For the 15-gon, one would do better to use the advanced integral equation methods of Serkh and Rokhlin or Helsing and Ojala [18, 19, 38], who solve problems with thousands of corners by recursive compressed inverse preconditioning applied on graded meshes. On the other hand, loosening the tolerance ε in Figure 6 speeds up the computations greatly; with a looser tolerance, the solve times for the last two examples improve substantially.
The last image of Figure 6 attempts a computation with all the poles shifted a fixed distance away from their corners, so that the exponential clustering near the singularities is destroyed. The fast convergence is now lost completely, confirming the decisive importance of Newman’s phenomenon and of our analysis in Section 2.
These figures show the speed and accuracy of the rational approximation algorithm. One could look deeper, however, to assess its numerical properties and limitations. In particular, it was mentioned in the last section that our current implementation cannot reliably compute solutions with accuracy close to machine precision. It is natural to ask, where does this limitation come from, and could the situation be improved? At present, our understanding is not sufficient to attempt a proper discussion of this question. Certainly a relevant fact is that our least-squares problems lie in the regime where the matrices are hugely ill-conditioned and yet reasonable fits are obtained anyway. To illustrate this, Figure 7 shows the second example of Figure 6, but carried to higher accuracy. Instead of six data points, there are now 13, and one sees stagnation of the convergence well before machine precision. The plot on the right shows that long before this point, the matrix A has reached a condition number on the scale of the reciprocal of machine precision, though the norms of the coefficient vectors remain under control. This is a familiar effect with overcomplete bases, investigated by various authors over the years [5, 21, 24, 26]. We believe that further analysis will provide an understanding of how these effects play out in the present context and suggest modifications to bring our results closer to machine precision. At the same time, it is worth emphasizing that without such analysis, we still get fast solutions to excellent accuracy.
We now mention a number of variations on the basic problem considered so far.
Curved boundaries. There is nothing new to be done here. The method we have described works for curved boundaries too, the only difficulty being that the programming is less straightforward, since the definition of the geometry involves more than just line segments. Figure 8 shows the solution to ten-digit accuracy of a Dirichlet problem of this kind on a domain bounded by a circular arc and an elliptical arc. The strings of poles have been placed along lines oriented at arbitrary angles to illustrate that using precisely the exterior bisector is not important.
Neumann boundary conditions. Little changes here; the boundary matching now involves derivatives as well as function values. For a rigorous a posteriori error bound, one will have to make sure to use a formulation of the maximum principle that applies with the given boundary conditions.
Transmission problems. Some problems involve transmission of a signal from one domain to another with a matching condition at the interface, often involving derivatives. In this case we would use distinct rational representations in the different regions.
Discontinuous boundary conditions. In applications, Laplace problems very often have discontinuous boundary conditions; a prototypical example would be a square with boundary value 0 on three sides and 1 on the fourth. Mathematically, this is no problem at all, and the same applies to our numerical algorithm. The one change that must be introduced is that convergence of the error to zero in the supremum norm is no longer possible. Instead, we find it convenient to guide the computation by a supremum norm weighted by distance to the nearest corner. Again one must be careful in applying the maximum principle.
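The weighted norm is simple to compute from boundary samples. A minimal sketch (the function name is ours, not part of any published code):

```python
import numpy as np

def weighted_sup_error(z, err, corners):
    """Supremum of |err| at boundary points z, weighted by distance to the
    nearest corner, so that errors are permitted to grow near the
    discontinuities."""
    dist = np.min(np.abs(z[:, None] - np.asarray(corners)[None, :]), axis=1)
    return np.max(dist * np.abs(err))

# Example: on the bottom edge of the unit square, an error blowing up
# like 1/dist near the corners still has weighted norm exactly 1.
corners = [0.0, 1.0, 1.0 + 1.0j, 1.0j]
x = np.linspace(0.01, 0.99, 50)
z = x + 0j
err = 1.0 / np.minimum(x, 1.0 - x)
w = weighted_sup_error(z, err, corners)
```

An unbounded pointwise error near a corner can thus coexist with a small weighted norm, which is exactly the behavior one must tolerate with discontinuous data.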
Long domains. The “Runge part” of (3) consists of a monomial expansion about a central point. If the degree is large and the domain is highly elongated, one is likely to run into trouble, because this is an exponentially ill-conditioned basis. Approximation theory suggests turning to the basis of Faber polynomials for the domain, but this would introduce an auxiliary computation as difficult as the original Laplace problem. We believe that simpler methods would be effective in many cases, such as a switch to a suitably scaled and shifted Chebyshev basis if the domain is elongated mainly in one direction. Impressive solutions for such problems based on integral equations can be found in the literature.
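The conditioning gap can be seen on a one-dimensional model of an elongated region, the interval [−5, 5] (a sketch; the degree and sample counts are arbitrary choices):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Compare basis conditioning on the "elongated" interval [-5, 5]:
# raw monomials about the center versus Chebyshev polynomials scaled
# and shifted to the interval.
x = np.linspace(-5, 5, 200)
deg = 20

A_mono = np.vander(x, deg + 1, increasing=True)   # 1, x, x^2, ..., x^20
A_cheb = C.chebvander(x / 5.0, deg)               # T_k(x/5), k = 0..20

kappa_mono = np.linalg.cond(A_mono)
kappa_cheb = np.linalg.cond(A_cheb)
```

The monomial condition number is astronomically large (the column x^20 reaches 5^20 ≈ 10^14), while the scaled Chebyshev basis remains well conditioned, which is the rationale for the switch suggested above.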
Poisson equation. For a Poisson problem Δu = f with boundary condition u = g, as is very well known, one can first find any function v such that Δv = f in the domain, with arbitrary boundary values. The correction u − v needed to recover u is now the solution of the Laplace problem with boundary conditions g − v. This Laplace problem will have corner singularities as usual.
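A toy instance of this reduction, with a hypothetical choice of f and g (our names v, g here are illustrative):

```python
import numpy as np

# For the Poisson equation lap(u) = f with constant f = 4, the quadratic
# v(x, y) = x^2 + y^2 is a particular solution, since lap(v) = 4.
def v(x, y):
    return x**2 + y**2

# Five-point finite-difference check that lap(v) = 4 at a sample point
# (the stencil is exact for quadratics, so only rounding error remains):
h = 1e-3
x0, y0 = 0.3, -0.7
lap_v = (v(x0 + h, y0) + v(x0 - h, y0) +
         v(x0, y0 + h) + v(x0, y0 - h) - 4.0 * v(x0, y0)) / h**2

# The correction u - v solves a Laplace problem with boundary data g - v;
# e.g., with original boundary data g = 0, at the boundary point (1, 0):
g = lambda x, y: 0.0
corrected_bc = g(1.0, 0.0) - v(1.0, 0.0)
```

The corrected boundary data g − v is what one would feed to the rational least-squares solver; the corner singularities it must resolve are the same as in the pure Laplace case.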
Helmholtz equation. Here, it appears that rational functions can be generalized to series involving Hankel functions, though the mathematics brings significant challenges. A preliminary announcement appeared in the note of Gopal and Trefethen in Proc. Nat. Acad. Sci., and development is underway.
Multiply connected domains. In a multiply connected domain, poles will need to be placed within the holes as well as in the exterior (and logarithmic terms must also be added to the Runge part of (3) [2, 50]). In the context of rational functions, as observed by Runge himself [12, 41], this is not a very significant change. Of course, as always, one will have to make sure that no poles fall within the problem domain. Illustrations of series methods for Laplace problems in multiply connected smooth regions, without exponential clustering of poles, can be found in Trefethen’s “Series solution of Laplace problems.”
Slits. A domain with a slit, that is, a corner with interior half-angle π, appears problematic, since one can hardly place poles on the slit. However, local transformations can get around such a problem. If the domain has a boundary slit coming in to the origin along the negative real axis, for example, then the local change of variables s = z^{1/2} transforms the slit to a segment of the imaginary axis. One can now introduce poles β_k in the s-plane to get terms a_k/(z^{1/2} − β_k) for a Newman-style expansion. Such transformations are just local, changing only the nature of some of the terms appearing in (3); the least-squares problem is still solved on the boundary in the z-plane.
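A quick numerical confirmation of the straightening map (assuming, as above, the principal square root s = z^{1/2} with the slit along the negative real axis):

```python
import numpy as np

# Points z = -t (t > 0) on a slit along the negative real axis.
t = np.linspace(0.1, 1.0, 10)
z = -t + 0j

# The principal square root maps them onto the imaginary axis:
# Re(s) = 0 and Im(s) = sqrt(t).
s = np.sqrt(z)
```

Pole terms a_k/(s − β_k) placed in the s-plane then correspond to square-root terms a_k/(z^{1/2} − β_k) back in the z-plane, as in the text.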
Faster than root-exponential convergence. As mentioned at the beginning of Section 2, faster than root-exponential convergence is not possible for Laplace problems in regions with corners, if accuracy is measured by the maximum norm over the domain. However, as pointed out in Myth 3 of “Six myths of polynomial approximation and quadrature” and in Chapter 16 of Approximation Theory and Approximation Practice, best approximations may not be optimal! The point here is that insistence on global supremum norm error bounds may have terribly adverse effects on an approximation throughout almost all of a domain. Specifically, the discussion of discontinuous boundary conditions above suggests a possibility that might be very appealing to users. Even for a problem with continuous boundary data, one could measure convergence by a supremum norm weighted by distance to the nearest corner. This could make it possible to accelerate convergence of our algorithm from root-exponential to nearly exponential, yet still guarantee errors bounded by a tolerance divided by the distance to the nearest corner. We plan to include an option along these lines in our software.
The numerical solution of PDEs has been brought to a highly developed state by generations of mathematicians and engineers since the 1950s. The Laplace equation in 2D is as basic a problem in this area as one could ask for, and many methods can handle it. Getting high accuracy in the presence of corner singularities, however, still requires care, as we found from the responses to a challenge problem involving the L-shaped domain that we posed to the NA Digest email discussion group in November 2018 [16, 51]. The most common approach to such a problem would be to use the finite element method (FEM), for which there is widely distributed, freely available software such as deal.II, FEniCS, Firedrake, IFISS, PLTMG, and XLiFE++ [1, 4, 9, 34, 39, 58]. However, it is not straightforward to get, say, 8 digits of accuracy by such methods. (Just one respondent to our posting solved the problem to more than 8 digits of accuracy by FEM: Zoïs Moitier of the University of Rennes 1, who used XLiFE++ with finite elements of order 15.) For this, it may be preferable to use the more specialized tools of hp-adaptive FEM. Such methods are powerful, but we believe they cannot compete for speed and simplicity for these simple planar problems, and in particular, we are unaware of any FEM software that can match the performance shown in Figures 5 and 6.
The other highly developed approach for Laplace problems is boundary integral equations (BIE), more restricted than FEM but very powerful when applicable. Whereas FEM constructs a two-dimensional representation of the solution of a PDE, BIE represents it via an integral over a one-dimensional density function on the boundary. This speeds up the computations greatly, and in addition, one can often avoid the ill-conditioned matrices associated with FEM, especially through formulations involving Fredholm integral equations of the second kind. Evaluation of a BIE solution requires numerical quadrature, which poses challenges near the corners and also near the boundary away from the corners, but experts have developed quadrature methods that are fast and accurate [7, 27, 44]. We have already mentioned the method of Helsing and Ojala, with its ability to solve problems with thousands of corners [18, 19, 38]. There is no doubt that integral equations constitute the most powerful tool currently available for 2D Laplace problems.
Our method, which we think of as constructing a zero-dimensional representation of the solution, belongs to the category of techniques known as the Method of Fundamental Solutions (MFS) [5, 6, 10, 25, 26, 33]. This refers to any method in which a function is found satisfying given boundary conditions by expansion in a series of free-space solutions with point singularities. Sometimes the points are fixed in advance, making the calculation linear as in our method, and in other cases they are selected adaptively. Such ideas go back at least to Kupradze in the 1960s [28, 29] and are a special case of so-called Trefftz methods [21, 22]. One difference between what we are proposing here and the usual practice is that MFS expansions usually involve monopoles (= logarithmic point charges) rather than dipoles (= simple poles of a meromorphic function). A second difference is that the MFS literature does not make the connection with rational approximation theory, and in particular does not normally use exponentially clustered poles to achieve root-exponential convergence near singularities, though steps in this direction can be found in the literature. Another related method is that of Hochman et al., which involves rational functions determined by a nonlinear iteration.
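To make the comparison concrete, here is a minimal MFS calculation of the standard monopole kind (a sketch, not our method or any published code): logarithmic charges outside a disk, fitted to Dirichlet data by least squares, and checked against the known harmonic function Re(z³).

```python
import numpy as np

# Minimal MFS sketch: expand the solution in logarithmic point charges
# ("monopoles") placed outside the domain and fit the boundary data by
# least squares. Domain: unit disk; exact solution Re(z^3) for checking.
m, n = 200, 40
zb = np.exp(2j * np.pi * np.arange(m) / m)        # boundary sample points
cs = 2.0 * np.exp(2j * np.pi * np.arange(n) / n)  # charge points, |c| = 2
g = (zb**3).real                                  # Dirichlet data

A = np.log(np.abs(zb[:, None] - cs[None, :]))     # monopole basis
A = np.hstack([A, np.ones((m, 1))])               # plus a constant term
coef, *_ = np.linalg.lstsq(A, g, rcond=None)

# Evaluate at an interior point and compare with the exact solution.
z0 = 0.3 + 0.4j
u0 = np.log(np.abs(z0 - cs)) @ coef[:-1] + coef[-1]
err = abs(u0 - (z0**3).real)
```

For this smooth problem the monopoles converge rapidly; it is precisely near corner singularities that such fixed smooth-boundary charge distributions fall short and exponentially clustered poles become essential.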
It is interesting to consider the conceptual links between boundary integrals and MFS methods such as our rational functions. MFS methods seek to expand the solution in a finite series of point singularities lying outside the problem domain. Boundary integral equations, by contrast, seek a continuum distribution of charge singularities (single-layer for monopoles, double-layer for dipoles) precisely along the boundary. This formulation on the boundary has the advantage that it is connected with highly developed and powerful mathematical theories of integral operators, while point charge approaches are less fully understood. On the other hand, the BIE approach has the flavor of interpolation rather than least-squares, requiring exact determination of a precisely specified density function, quite a contrast to the great simplicity and convenience of the least-squares formulations of the point charge methods, which take advantage of “overcomplete bases.” For example, Walsh and his student Curtiss (Donald Newman was also a student of Walsh’s, receiving his PhD from Harvard at age 22) were interested in solving Laplace problems by interpolatory boundary matching, and ended up giving a good deal of their effort to the interpolation rather than the approximation theory [8, 54]. The much simpler idea of using least-squares boundary matching goes back to Moler, and undoubtedly others, in the 1960s if not before [35, 50], but has been surprisingly slow to find a place in mainstream numerical PDEs.
As we have commented elsewhere, we see a historical dimension in the relative lack of attention given to global methods of a spectral flavor for solving PDEs, such as we have proposed here, that take advantage of the analyticity of problems arising in applications. In the 19th century, mathematicians’ focus was mainly on analytic functions, possibly with branch points. In the 20th century, however, interest in that kind of function theory largely gave way to interest in real analysis, with great attention being given to detailed study of questions of regularity, that is, of precise degrees of smoothness of functions and their consequences. This trend is strikingly on display in the standard work on elliptic PDEs in domains with corners, by Grisvard, which begins with 80 pages on Sobolev spaces and proceeds to develop an extensive theory in that setting. The numerical PDE community followed this trend, with the prevailing discourse again becoming analysis in terms of Sobolev spaces. Although in principle one can analyze even analytic problems with these tools, in practice they bring a bias toward discretizations tuned to problems of limited smoothness. (For example, in the FEM literature as in Grisvard’s book, one encounters the view that a PDE problem with a salient corner is nonsingular whereas the same problem with a reentrant corner is singular. Such a distinction would not make sense from the point of view of classical function theory.) Yet many problems of interest in applications are analytic or piecewise analytic.
Turning to future prospects, we note that the 2D Laplace equation is very special. We have begun exploration of generalizations to the Helmholtz equation, and other problems such as the biharmonic equation can also be investigated. The extension to 3D is a bigger issue to be considered. Here, as in 2D, there are MFS papers going back many years. The question is whether a new, more focused connection with approximation theory near singularities can bring a new efficiency, as we have shown here for 2D problems, and we are open-minded on this question.
In this paper we have introduced a new approach to numerical solution of certain PDEs, globally accurate and extremely simple, requiring no preliminary analysis of corner singularities. The method is very young, and there are innumerable questions to be answered and details to be improved in further research. Since the method exploits the same mathematics that makes lightning strike trees and buildings at sharp edges, we like to think of it as a “lightning Laplace solver.”
We are grateful for advice to Alex Barnett, Timo Betcke, Leslie Greengard, Dave Hewett, Daan Huybrechs, Yuji Nakatsukasa, Vladimir Rokhlin, Kirill Serkh, André Weideman, and an anonymous referee. We have also benefited greatly from Bernhard Beckermann’s extensive knowledge of the rational approximation literature.
-  G. Alzetta, et al., The deal.II Library, Version 9.0, J. Numer. Math., 26 (2018), pp. 173–183; see also https://www.dealii.org.
-  S. Axler, Harmonic functions from a complex analysis viewpoint, Amer. Math. Monthly, 93 (1986), pp. 246–258.
-  T. Bagby and P. M. Gauthier, Approximation by harmonic functions on closed subsets of Riemann surfaces, J. d’Analyse Math., 51 (1988), pp. 259–284.
-  R. E. Bank, PLTMG: A software package for solving elliptic partial differential equations, Users’ Guide 13.0, Dept. Math., U. Cal. San Diego, 2018.
-  A. H. Barnett and T. Betcke, Stability and convergence of the method of fundamental solutions for Helmholtz problems on analytic domains, J. Comput. Phys., 227 (2008), pp. 7003–7026.
-  A. Bogomolny, Fundamental solutions method for elliptic boundary value problems, SIAM J. Numer. Anal., 22 (1985), pp. 644–669.
-  J. Bremer, V. Rokhlin, and I. Sammis, Universal quadratures for boundary integral equations on two-dimensional domains with corners, J. Comp. Phys., 229 (2010), pp. 8259–8280.
-  J. H. Curtiss, Interpolation by harmonic polynomials, J. SIAM, 10 (1962), pp. 709–736.
-  H. C. Elman, A. Ramage, and D. J. Silvester, IFISS: A computational laboratory for investigating incompressible flow problems, SIAM Rev., 56 (2014), pp. 261–273.
-  G. Fairweather and A. Karageorghis, The method of fundamental solutions for elliptic boundary value problems, Adv. Comput. Math., 9 (1998), pp. 69–95.
-  S. Filip, A. Javeed, and L. N. Trefethen, Smooth random functions, random ODEs, and Gaussian processes, SIAM Rev., 61 (2019), pp. 185–205.
-  D. Gaier, Lectures on Complex Approximation, Birkhäuser, 1987.
-  A. A. Gončar, On the rapidity of rational approximation of continuous functions with characteristic singularities, Math. USSR-Sbornik, 2 (1967), pp. 561–568.
-  A. A. Gončar, Zolotarev problems connected with rational functions, Math. USSR-Sbornik, 7 (1969), pp. 623–635.
-  A. Gopal and L. N. Trefethen, Representation of conformal maps by rational functions, Numer. Math., 142 (2019), pp. 359–382.
-  A. Gopal and L. N. Trefethen, New Laplace and Helmholtz solvers, Proc. Nat. Acad. Sci., pnas.1904139116, 2019.
-  P. Grisvard, Elliptic Problems in Nonsmooth Domains, SIAM, 2011.
-  J. Helsing, Solving integral equations on piecewise smooth boundaries using the RCIP method: a tutorial, arXiv:1207.6737v9, 2018.
-  J. Helsing and R. Ojala, Corner singularities for elliptic problems: integral equations, graded meshes, quadrature, and compressed inverse preconditioning, J. Comp. Phys., 227 (2008), pp. 8820–8840.
-  P. Henrici, Applied and Computational Complex Analysis, v. 3, Wiley, 1986.
-  R. Hiptmair, A. Moiola, and I. Perugia, A survey of Trefftz methods for the Helmholtz equation, in G. R Barrenechea, et al., Building Bridges: Connections and Challenges in Modern Approaches to Numerical Partial Differential Equations, Lect. Notes Comp. Sci. Engr., 114, Springer, 2016, pp. 237–278.
-  R. Hiptmair, A. Moiola, I. Perugia and Ch. Schwab, Approximation by harmonic polynomials in star-shaped domains and exponential convergence of Trefftz hp-DGFEM, ESAIM: Math. Model. Numer. Anal., 48 (2014), pp. 727–752.
-  A. Hochman, Y. Leviatan, and J. K. White, On the use of rational-function fitting methods for the solution of 2D Laplace boundary-value problems, J. Comp. Phys., 238 (2013), pp. 337–358.
-  D. Huybrechs and A.-E. Olteanu, An oversampled collocation approach of the wave based method for Helmholtz problems, Wave Motion, 87 (2019), pp. 92–105.
-  M. Katsurada and H. Okamoto, A mathematical study of the charge simulation method I, J. Fac. Sci. Univ. Tokyo Sect. IA Math., 35 (1988), pp. 507–518.
-  T. Kitagawa, Asymptotic stability of the fundamental solution method, J. Comput. Appl. Math., 38 (1991), pp. 263–269.
-  A. Klöckner, A. Barnett, L. Greengard, and M. O’Neil, Quadrature by expansion: A new method for the evaluation of layer potentials, J. Comp. Phys., 252 (2013), pp. 332–349.
-  V. D. Kupradze, Potential Methods in the Theory of Elasticity, Israel Program for Scientific Translations, Jerusalem, 1965.
-  V. D. Kupradze and M. A. Aleksidze, The method of functional equations for the approximate solution of certain boundary value problems, 1964.
-  R. S. Lehman, Developments at an analytic corner of solutions of elliptic partial differential equations, J. Math. Mech., 8 (1959), pp. 727–760.
-  H. Lewy, On the reflection laws of second order differential equations in two dependent variables, Bull. Amer. Math. Soc., 65 (1959), pp. 37–58.
-  E. Levin and E. B. Saff, Potential theoretic tools in polynomial and rational approximation, in Harmonic Analysis and Rational Approximation, Springer, 2006, pp. 71–94.
-  Y. Liu, The Numerical Solution of Frequency-Domain Acoustic and Electromagnetic Periodic Scattering Problems, PhD thesis, Dartmouth College, 2016.
-  A. Logg, K.-A. Mardal, and G. Wells, eds., Automated Solution of Differential Equations by the Finite Element Method: the FEniCS Book, Springer, 2012; see also https://fenicsproject.org.
-  C. B. Moler, Accurate bounds for the eigenvalues of the Laplacian and applications to rhombical domains, Tech. Rep. CS 121, Dept. Comp. Sci., Stanford U., 1969, http://i.stanford.edu/TR/CS-TR-69-121.html.
-  Y. Nakatsukasa, O. Sète, and L. N. Trefethen, The AAA algorithm for rational approximation, SIAM J. Sci. Comp., 40 (2018), pp. A1494–A1522.
-  D. J. Newman, Rational approximation to |x|, Mich. Math. J., 11 (1964), pp. 11–14.
-  R. Ojala, A robust and accurate solver of Laplace’s equation with general boundary conditions on general domains in the plane, J. Comp. Math., 30 (2012), pp. 433–448.
-  F. Rathgeber, et al., Firedrake: automating the finite element method by composing abstractions, ACM Trans., Math. Softw., 43 (2016), pp. 24.1–24.27; see also https://www.firedrakeproject.org.
-  V. Rokhlin, Rapid solution of integral equations of classical potential theory, J. Comp. Phys., 60 (1985), pp. 187–207.
-  C. Runge, Zur Theorie der eindeutigen analytischen Functionen, Acta Math., 6 (1885), pp. 229–244.
-  E. B. Saff and V. Totik, Logarithmic Potentials with External Fields, Springer, 1997.
-  Ch. Schwab, p- and hp-Finite Element Methods, Clarendon Press, 1999.
-  K. Serkh and V. Rokhlin, On the solution of elliptic partial differential equations on regions with corners, J. Comput. Phys., 305 (2016), pp. 150–171.
-  H. Stahl, Best uniform rational approximation of x^α on [0, 1], Acta Math., 190 (2003), pp. 241–306.
-  F. Stenger, Explicit nearly optimal linear rational approximation with preassigned poles, Math. Comp., 47 (1986), pp. 225–252.
-  H. Takahasi and M. Mori, Quadrature formulas obtained by variable transformation, Numer. Math., 21 (1973), pp. 206–219.
-  L. N. Trefethen, Six myths of polynomial approximation and quadrature, Maths. Today, 2011.
-  L. N. Trefethen, Approximation Theory and Approximation Practice, SIAM, 2013.
-  L. N. Trefethen, Series solution of Laplace problems, ANZIAM J., 60 (2018), pp. 1–26.
-  L. N. Trefethen, 8-digit Laplace solutions on polygons?, Posting on NA Digest at www.netlib.org/na-digest-html, 29 November 2018.
-  L. N. Trefethen and J. A. C. Weideman, The exponentially convergent trapezoidal rule, SIAM Rev., 56 (2014), pp. 385–458.
-  W. Tucker, Validated Numerics: A Short Introduction to Rigorous Computations, Princeton, 2011.
-  J. L. Walsh, The approximation of harmonic functions by harmonic polynomials and by harmonic rational functions, Bull. Amer. Math. Soc., 35 (1929), pp. 499–544.
-  J. L. Walsh, On interpolation and approximation by rational functions with preassigned poles, Trans. Amer. Math. Soc., 34 (1932), pp. 22–74.
-  J. L. Walsh, Interpolation and Approximation by Rational Functions in the Complex Domain, Amer. Math. Soc., 1935.
-  W. Wasow, Asymptotic development of the solution of Dirichlet’s problem at analytic corners, Duke Math. J., 24 (1957), pp. 47–56.
-  XLiFE++ package, https://uma.ensta-paristech.fr/soft/XLiFE++/.