# A fast simple algorithm for computing the potential of charges on a line

We present a fast method for evaluating expressions of the form u_j = ∑_{i=1, i≠j}^n α_i/(x_i − x_j), for j = 1, …, n, where α_i are real numbers, and x_i are points in a compact interval of ℝ. This expression can be viewed as representing the electrostatic potential generated by charges on a line in ℝ³. While fast algorithms for computing the electrostatic potential of general distributions of charges in ℝ³ exist, in a number of situations in computational physics it is useful to have a simple and extremely fast method for evaluating the potential of charges on a line; we present such a method in this paper, and report numerical results for several examples.


## 1. Introduction and motivation

### 1.1. Introduction

In this paper, we describe a simple fast algorithm for evaluating expressions of the form

 (1) u_j = ∑_{i=1, i≠j}^n α_i/(x_i − x_j),  for j = 1, …, n,

where α_1, …, α_n are real numbers, and x_1, …, x_n are points in a compact interval [a,b] of ℝ. This expression can be viewed as representing the electrostatic potential generated by charges on a line in ℝ³. We remark that fast algorithms for computing the electrostatic potential generated by general distributions of charges in ℝ³ exist, see for example the Fast Multipole Method that was introduced by [6], and which has been extended by several authors including [7, 13]. However, in a number of situations in computational physics it is useful to have a simple and extremely fast method for evaluating the potential of charges on a line; we present such a method in this paper. Under mild assumptions the presented method involves O(n log n) operations and has a small constant. The method is based on writing the potential as

 1/r = ∫_0^∞ e^{−rt} dt.

We show that there exists a small set of quadrature nodes t_1, …, t_m and weights w_1, …, w_m such that for a large range of values of r we have

 (2) 1/r ≈ ∑_{j=1}^m w_j e^{−r t_j},

see Lemma 4.5, which is a quantitative version of (2). Numerically, the nodes and weights can be computed using a procedure for constructing generalized Gaussian quadratures, see §5.2. An advantage of representing 1/r as a sum of exponentials is that the translation operator

 (3) 1/r ↦ 1/(r + r′)

can be computed by taking an inner product of the weights (w_1, …, w_m) with a diagonal transformation of the vector (e^{−r t_1}, …, e^{−r t_m}). Indeed, we have

 (4) 1/(r + r′) ≈ ∑_{j=1}^m w_j e^{−(r+r′) t_j} = ∑_{j=1}^m w_j e^{−r′ t_j} e^{−r t_j}.

The algorithm described in §3 leverages the existence of this diagonal translation operator to efficiently evaluate (1). We remark that previous works have used the diagonal form (4) of the translation operator (3) to evaluate expressions of the form (1), see Dutt, Gu and Rokhlin [4], and Yarvin and Rokhlin [19]. The current paper improves upon these past works by taking advantage of robust generalized Gaussian quadrature codes [2] that were not previously available; in particular, the presented algorithm is both simpler and more practical for large numbers of points than past methods, see the discussion in §5.1.
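The existence of an approximation of the form (2) can be checked numerically. The sketch below does not use the paper's generalized Gaussian quadratures (see §5.2); instead it obtains a valid, though larger, set of nodes and weights by substituting t = e^s in the integral 1/r = ∫_0^∞ e^{−rt} dt and applying the trapezoidal rule, which converges exponentially fast for this integrand. The function name and parameter choices here are illustrative only.

```python
import numpy as np

def exp_sum_quadrature(delta, h=0.2):
    """Nodes t and weights w with 1/r ≈ sum_j w[j]*exp(-r*t[j]) for r in [delta, 1].

    Substituting t = e^s turns 1/r = ∫_0^∞ e^{-rt} dt into
    ∫_{-∞}^∞ exp(s - r e^s) ds, which the trapezoidal rule with step h
    resolves to roughly exp(-pi^2/h) relative accuracy.
    """
    s = np.arange(np.log(1e-12), np.log(40.0 / delta), h)
    t = np.exp(s)
    return t, h * t

# check the approximation over three decades of r
t, w = exp_sum_quadrature(1e-3)
r = np.geomspace(1e-3, 1.0, 1000)
approx = np.exp(-np.outer(r, t)) @ w
max_rel_err = np.max(np.abs(approx - 1.0 / r) * r)
```

The node count of this construction grows like O(log(δ⁻¹)/h); the generalized Gaussian quadratures of §5.2 achieve comparable accuracy with far fewer nodes, which is part of what makes the constant in the presented algorithm small.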

### 1.2. Motivation

Expressions of the form (1) appear in a number of situations in computational physics. In particular, such expressions arise in connection with the Hilbert Transform

 Hf(x) = lim_{ε→0} (1/π) ∫_{|x−y|≥ε} f(y)/(y − x) dy.

For example, the computation of the projection of a function f onto the first m+1 functions in a family of orthogonal polynomials can be reduced to an expression of the form (1) by using the Christoffel–Darboux formula, which is related to the Hilbert transform; we detail this reduction in the following.

Let p_0, p_1, p_2, … be a family of monic polynomials that are orthogonal with respect to the weight w on [a,b]. Consider the projection operator

 P_m f(x) := ∫_a^b ∑_{k=0}^m (p_k(x) p_k(y)/h_k) f(y) w(y) dy,

where h_k := ∫_a^b p_k(y)² w(y) dy. Let x_1, …, x_n and w(x_1), …, w(x_n) be the n-point Gaussian quadrature nodes and weights associated with w, and set

 (5) u_j := ∑_{i=1}^n ∑_{k=0}^m (p_k(x_j) p_k(x_i)/h_k) f(x_i) w(x_i),  for j = 1, …, n.

By construction, the polynomial that interpolates the values u_1, …, u_n at the points x_1, …, x_n will accurately approximate P_m f on [a,b] when f is sufficiently smooth, see for example §7.4.6 of Dahlquist and Björck [3]. Directly evaluating (5) would require O(mn²) operations. In contrast, the algorithm of this paper together with the Christoffel–Darboux formula can be used to evaluate (5) in O(n(m + log n)) operations. The Christoffel–Darboux formula states that

 (6) ∑_{k=0}^m p_k(x) p_k(y)/h_k = (1/h_m) (p_{m+1}(x) p_m(y) − p_m(x) p_{m+1}(y))/(x − y),

see §18.2(v) of [14]. Using (6) to rewrite (5) yields

 (7) u_j = (1/h_m) ( f(x_j) + ∑_{i=1, i≠j}^n (p_{m+1}(x_j) p_m(x_i) − p_m(x_j) p_{m+1}(x_i))/(x_j − x_i) · f(x_i) w(x_i) ),

where we have used the fact that the diagonal term of the double summation is equal to f(x_j)/h_m. The summation in (7) can be rearranged into two expressions of the form (1), and thus the method of this paper can be used to compute a representation of P_m f in O(n(m + log n)) operations.
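The identity (6) can be sanity-checked in a concrete family. For the classical (non-monic) Legendre polynomials, which are orthogonal on [−1,1] with w ≡ 1, the Christoffel–Darboux formula takes the form ∑_{k=0}^m (2k+1) P_k(x) P_k(y) = (m+1)(P_{m+1}(x)P_m(y) − P_m(x)P_{m+1}(y))/(x − y); a quick numerical check, with arbitrary test values of m, x, and y:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def P(k, x):
    """Evaluate the degree-k Legendre polynomial at x."""
    return Legendre.basis(k)(x)

m, x, y = 6, 0.3, -0.7
lhs = sum((2 * k + 1) * P(k, x) * P(k, y) for k in range(m + 1))
rhs = (m + 1) * (P(m + 1, x) * P(m, y) - P(m, x) * P(m + 1, y)) / (x - y)
```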

###### Remark 1.1.

Analogs of the Christoffel–Darboux formula hold for many other families of functions; for example, if J_ν is a Bessel function of the first kind, then we have

 ∑_{k=1}^∞ 2(ν+k) J_{ν+k}(w) J_{ν+k}(z) = (wz/(w − z)) (J_{ν+1}(w) J_ν(z) − J_ν(w) J_{ν+1}(z)),

see [18]. This formula can be used to write a projection operator related to Bessel functions in a form analogous to (7), and the algorithm of this paper can be similarly applied.

###### Remark 1.2.

A simple modification of the algorithm presented in this paper can be used to evaluate more general expressions of the form

 v_j = ∑_{i=1}^n α_i/(x_i − y_j),  for j = 1, …, m,

where x_1, …, x_n are source points, and y_1, …, y_m are target points. For simplicity, this paper focuses on the case where the source and target points are the same, which is the case in the projection application described above.

## 2. Main result

### 2.1. Main result

Our principal analytical result is the following theorem, which provides precise accuracy and computational complexity guarantees for the algorithm presented in this paper, which is detailed in §3.

###### Theorem 2.1.

Let points x_1 < x_2 < ⋯ < x_n in a compact interval [a,b] and real numbers α_1, …, α_n be given. Set

 u_j := ∑_{i=1, i≠j}^n α_i/(x_i − x_j),  for j = 1, …, n.

Given ε > 0 and δ ∈ (0,1), the algorithm described in §3 computes values ũ_1, …, ũ_n such that

 (8) |ũ_j − u_j| / ∑_{i=1}^n |α_i| ≤ ε,  for j = 1, …, n,

in O((n log(δ⁻¹) + N_δ) log(ε⁻¹)) operations, where

 (9) N_δ := ∑_{j=1}^n #{x_i : |x_j − x_i| < δ(b−a)}.

The proof of Theorem 2.1 is given in §4. Under typical conditions, the presented algorithm involves O(n log(n) log(ε⁻¹)) operations. The following corollary describes a case of interest, where the points are Chebyshev nodes for a compact interval (we define Chebyshev nodes in §4.2).

###### Corollary 2.1.

Fix ε > 0, and let the points x_1, …, x_n be Chebyshev nodes on [a,b]. If δ = 1/(2n), then the algorithm of §3 involves O(n log(n) log(ε⁻¹)) operations.

The proof of Corollary 2.1 is given in §4.5. The following corollary states that a similar result holds for uniformly random points.

###### Corollary 2.2.

Fix ε > 0, and suppose that x_1, …, x_n are sampled uniformly at random from [a,b]. If δ = 1/(2n), then the algorithm of §3 involves O(n log(n) log(ε⁻¹)) operations with high probability.

The proof of Corollary 2.2 is immediate from standard probabilistic estimates. The following remark describes an adversarial configuration of points.

###### Remark 2.1.

Fix n, and let x_1, …, x_{2n} be a collection of 2n points such that x_1, …, x_n and x_{n+1}, …, x_{2n} are evenly spaced in [0, 2^{−n}] and [1, 1 + 2^{−n}], respectively, that is

 x_j = 2^{−n} (j−1)/(n−1),  and  x_{n+j} = 1 + 2^{−n} (j−1)/(n−1),  for j = 1, …, n.

We claim that Theorem 2.1 cannot guarantee a complexity better than Ω(n²) for this configuration of points. Indeed, if δ ≥ 2^{−n}, then N_δ = Ω(n²), and if δ < 2^{−n}, then n log(δ⁻¹) = Ω(n²). In either case

 n log(δ⁻¹) + N_δ = Ω(n²).

This complexity is indicative of the performance of the algorithm for this point configuration; the reason that the algorithm performs poorly is that structures exist at two different scales. If such a configuration were encountered in practice, it would be possible to modify the algorithm of §3 to also involve two different scales, and thereby restore near-linear complexity.

## 3. Algorithm

### 3.1. High level summary

The algorithm involves passing over the points twice. First, we pass over the points in ascending order and compute

 (10) ũ⁺_j ≈ ∑_{i=1}^{j−1} α_i/(x_i − x_j),  for j = 1, …, n,

and second, we pass over the points in descending order and compute

 (11) ũ⁻_j ≈ ∑_{i=j+1}^n α_i/(x_i − x_j),  for j = 1, …, n.

Finally, we define ũ_j := ũ⁺_j + ũ⁻_j for j = 1, …, n, such that

 ũ_j ≈ ∑_{i=1, i≠j}^n α_i/(x_i − x_j),  for j = 1, …, n.

We call the computation of (10) the forward pass of the algorithm, and the computation of (11) the backward pass of the algorithm. The forward pass computes the potential generated by all points to the left of a given point, while the backward pass computes the potential generated by all points to the right of a given point. In §3.2 and §3.3 we give an informal and a detailed description of the forward pass. The backward pass is identical except that it considers the points in reverse order.
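The split into the sums (10) and (11) is exact before any approximation is made; with hypothetical random data, a direct quadratic-time check that the forward and backward sums add up to (1) reads:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
x = np.sort(rng.uniform(0.0, 1.0, n))     # sorted points in [a, b] = [0, 1]
alpha = rng.standard_normal(n)

# forward sum (10), backward sum (11), and the full sum (1), all direct
u_fwd = np.array([sum(alpha[i] / (x[i] - x[j]) for i in range(j)) for j in range(n)])
u_bwd = np.array([sum(alpha[i] / (x[i] - x[j]) for i in range(j + 1, n)) for j in range(n)])
u_full = np.array([sum(alpha[i] / (x[i] - x[j]) for i in range(n) if i != j) for j in range(n)])
```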

### 3.2. Informal description

In the following, we give an informal description of the forward pass of the algorithm, which computes

 ũ⁺_j ≈ ∑_{i=1}^{j−1} α_i/(x_i − x_j),  for j = 1, …, n.

Assume that we are given a small set of nodes t_1, …, t_m and weights w_1, …, w_m such that

 (12) 1/r ≈ ∑_{i=1}^m w_i e^{−r t_i}  for r ∈ [δ(b−a), b−a],

where δ ∈ (0,1) is given and fixed. The existence and computation of such nodes and weights is described in §4.4 and §5.2. We divide the sum defining ũ⁺_j into two parts:

 (13) ũ⁺_j ≈ ∑_{i=1}^{j0} α_i/(x_i − x_j) + ∑_{i=j0+1}^{j−1} α_i/(x_i − x_j),

where j0 is the largest index such that x_j − x_{j0} ≥ δ(b−a). By definition, the points x_1, …, x_{j0} are all distance at least δ(b−a) from x_j. Therefore, by (12)

 ũ⁺_j ≈ −∑_{i=1}^{j0} ∑_{k=1}^m w_k α_i e^{−(x_j − x_i) t_k} + ∑_{i=j0+1}^{j−1} α_i/(x_i − x_j).

If we define

 (14) g_k(j0) = ∑_{i=1}^{j0} α_i e^{−(x_{j0} − x_i) t_k},  for k = 1, …, m,

then it is straightforward to verify that

 (15) ũ⁺_j ≈ −∑_{k=1}^m w_k g_k(j0) e^{−(x_j − x_{j0}) t_k} + ∑_{i=j0+1}^{j−1} α_i/(x_i − x_j).

Observe that we can update g_k(j0) to g_k(j0+1) using the following formula

 (16) g_k(j0+1) = α_{j0+1} + e^{−(x_{j0+1} − x_{j0}) t_k} g_k(j0),  for k = 1, …, m.

We can now summarize the algorithm for computing ũ⁺_1, …, ũ⁺_n. For each j = 1, …, n, we compute ũ⁺_j by the following three steps:

1. Update j0 and g_1(j0), …, g_m(j0) as necessary.

2. Use (15) to evaluate the potential from the points x_i such that x_j − x_i ≥ δ(b−a).

3. Directly evaluate the potential from the points x_i such that i < j and x_j − x_i < δ(b−a).

By (16), each update of g_1, …, g_m requires O(m) operations, and we must update at most n times, so we conclude that the total cost of the first step of the algorithm is O(nm) operations. For each j, the second and third steps of the algorithm involve O(m) and O(#{x_i : i < j, x_j − x_i < δ(b−a)}) operations, respectively, see (15). It follows that the total cost of the second and third steps of the algorithm is O(nm + N_δ) operations, where N_δ is defined in (9). We conclude that ũ⁺_1, …, ũ⁺_n can be computed in O(nm + N_δ) operations. In §4, we complete the proof of the computational complexity guarantees of Theorem 2.1 by showing that there exist m = O(log(ε⁻¹) log(δ⁻¹)) nodes and weights that satisfy (12), where ε is the approximation error in (12).
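The update (16) maintains the invariant (14) exactly, not merely approximately; a short check with hypothetical random data and a few arbitrary exponent nodes t_k:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
x = np.sort(rng.uniform(0.0, 1.0, n))
alpha = rng.standard_normal(n)
t = np.array([0.5, 2.0, 7.0])          # arbitrary nodes, just for the check

ok = True
g = np.zeros_like(t)                   # g_k(0) = 0 (empty sum)
for j0 in range(n):
    # update rule (16): absorb the source at x[j0], shift the reference point
    if j0 == 0:
        g = alpha[0] * np.ones_like(t)
    else:
        g = alpha[j0] + np.exp(-(x[j0] - x[j0 - 1]) * t) * g
    # definition (14), evaluated directly
    direct = (alpha[: j0 + 1, None] * np.exp(-(x[j0] - x[: j0 + 1])[:, None] * t)).sum(axis=0)
    ok = ok and np.allclose(g, direct)
```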

### 3.3. Detailed description

In the following, we give a detailed description of the forward pass of the algorithm, which computes ũ⁺_1, …, ũ⁺_n. Suppose that ε > 0 and δ ∈ (0,1) are given and fixed. We describe the algorithm under the assumption that we are given quadrature nodes t_1, …, t_m and weights w_1, …, w_m such that

 (17) |1/r − ∑_{j=1}^m w_j e^{−r t_j}| ≤ ε  for r ∈ [δ(b−a), b−a].

The existence of such nodes and weights is established in §4.4, and their computation is discussed in §5.2. To simplify the description of the algorithm, we assume that x_0 := a is a placeholder node that does not generate a potential.

###### Algorithm 3.1.

Input: x_1, …, x_n, α_1, …, α_n, t_1, …, t_m, w_1, …, w_m, and δ. Output: ũ⁺_1, …, ũ⁺_n.

1. j0 ← 0, ũ⁺_j ← 0 for j = 1, …, n, and g_i ← 0 for i = 1, …, m

2. main loop:

3. for j = 1, …, n

4. update j0 and g_1, …, g_m:

5. while j0 + 1 < j and x_j − x_{j0+1} ≥ δ(b−a): j0 ← j0 + 1

6. for i = 1, …, m: g_i ← α_{j0} + e^{−(x_{j0} − x_{j0−1}) t_i} g_i

7. end for

8. end while

9. compute potential from the points x_i such that x_j − x_i ≥ δ(b−a):

10. for i = 1, …, m: ũ⁺_j ← ũ⁺_j − w_i g_i e^{−(x_j − x_{j0}) t_i}

11. end for

12. compute potential from the points x_i such that i < j and x_j − x_i < δ(b−a):

13. for i = j0 + 1, …, j − 1: ũ⁺_j ← ũ⁺_j + α_i/(x_i − x_j)

14. end for

15. end for
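A compact Python sketch of the two-pass algorithm, with the forward pass following the structure of §3.2 and the numbered steps above; the backward pass is obtained by running the forward pass on the reflected points. The quadrature below is generated by a simple t = e^s trapezoidal substitution as a stand-in for the generalized Gaussian quadrature of §5.2, and all names are illustrative.

```python
import numpy as np

def forward_pass(x, alpha, t, w, cut):
    """u[j] ≈ sum_{i<j} alpha[i]/(x[i]-x[j]); x must be sorted ascending.

    t, w must satisfy 1/r ≈ sum_k w[k]*exp(-r*t[k]) for r in [cut, x[-1]-x[0]].
    """
    n = len(x)
    u = np.zeros(n)
    g = np.zeros(len(t))          # g_k of (14), referenced to the point xref
    xref, j0 = x[0], -1
    for j in range(n):
        # steps 4-8: absorb every source at least `cut` to the left of x[j]
        while j0 + 1 < j and x[j] - x[j0 + 1] >= cut:
            j0 += 1
            g = g * np.exp(-(x[j0] - xref) * t) + alpha[j0]   # update (16)
            xref = x[j0]
        # steps 9-11: far field via the exponential representation (15);
        # the minus sign appears because x[i] - x[j] < 0 for i < j
        u[j] -= w @ (g * np.exp(-(x[j] - xref) * t))
        # steps 12-14: near field, evaluated directly
        for i in range(j0 + 1, j):
            u[j] += alpha[i] / (x[i] - x[j])
    return u

def fast_potential(x, alpha, t, w, cut):
    """u[j] ≈ sum_{i≠j} alpha[i]/(x[i]-x[j]) via forward and backward passes."""
    fwd = forward_pass(x, alpha, t, w, cut)
    # the backward pass is the forward pass on the reflected points -x
    bwd = forward_pass(-x[::-1], alpha[::-1], t, w, cut)[::-1]
    return fwd - bwd

# demo: compare against direct O(n^2) summation
rng = np.random.default_rng(0)
n = 400
x = np.sort(rng.uniform(0.0, 1.0, n))
alpha = rng.standard_normal(n)
cut = 1e-3 * (x[-1] - x[0])                       # delta*(b-a) with delta = 1e-3
s = np.arange(np.log(1e-12), np.log(40.0 / cut), 0.2)
t, w = np.exp(s), 0.2 * np.exp(s)                 # trapezoidal stand-in quadrature
u_fast = fast_potential(x, alpha, t, w, cut)
idx = np.arange(n)
u_direct = np.array([np.sum(alpha[idx != j] / (x[idx != j] - x[j])) for j in idx])
err = np.max(np.abs(u_fast - u_direct)) / np.sum(np.abs(alpha))
```

Note that the reflection x ↦ −x reverses the sign of every pairwise difference, which is why the backward contribution enters with a minus sign in `fast_potential`.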

###### Remark 3.1.

In some applications, it may be necessary to evaluate an expression of the form (1) for many different weights α_1, …, α_n associated with a fixed set of points x_1, …, x_n. For example, in the projection application described in §1.2 the weights correspond to the function f that is being projected, while the points are a fixed set of quadrature nodes. In such situations, pre-computing the exponentials used in Algorithm 3.1 will significantly improve the runtime, see §5.1.

## 4. Proof of Main Result

### 4.1. Organization

In this section we complete the proof of Theorem 2.1; the section is organized as follows. In §4.2 we give mathematical preliminaries. In §4.3 we state and prove two technical lemmas. In §4.4 we prove Lemma 4.5, which together with the analysis in §3 establishes Theorem 2.1. In §4.5 we prove Corollaries 2.1 and 2.2.

### 4.2. Preliminaries

Let n ≥ 1 be fixed, and suppose that a function f : [a,b] → ℝ and points x_1, …, x_n ∈ [a,b] are given. The interpolating polynomial of the function f at x_1, …, x_n is the unique polynomial P of degree at most n − 1 such that

 P(x_j) = f(x_j),  for j = 1, …, n.

This interpolating polynomial can be explicitly defined by

 (18) P(x) = ∑_{j=1}^n f(x_j) q_j(x),

where q_j is the nodal polynomial for x_j, that is,

 (19) q_j(x) = ∏_{k=1, k≠j}^n (x − x_k)/(x_j − x_k).

We say x_1, …, x_n are Chebyshev nodes for the interval [a,b] if

 (20) x_j = (b+a)/2 + ((b−a)/2) cos(π (j − 1/2)/n),  for j = 1, …, n.

The following lemma is a classical result in approximation theory. It says that a smooth function on a compact interval is accurately approximated by the interpolating polynomial of the function at Chebyshev nodes, see for example §4.5.2 of Dahlquist and Björck [3].

###### Lemma 4.1.

Let f ∈ C^n([a,b]), and let x_1, …, x_n be Chebyshev nodes for [a,b]. If P is the interpolating polynomial for f at x_1, …, x_n, then

 sup_{x∈[a,b]} |f(x) − P(x)| ≤ (2M/n!) ((b−a)/4)^n,

where

 M = sup_{x∈[a,b]} |f^{(n)}(x)|.
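Lemma 4.1 can be checked numerically; numpy's `chebinterpolate` interpolates at exactly the nodes (20) on [−1,1]. For f = exp with n = 8 nodes we have M = sup|f^{(8)}| = e, and the observed error should stay below the bound 2M/n!·((b−a)/4)^n:

```python
import numpy as np
from math import factorial
from numpy.polynomial.chebyshev import chebinterpolate, chebval

n = 8                                     # number of Chebyshev nodes on [-1, 1]
coef = chebinterpolate(np.exp, n - 1)     # degree n-1 interpolant at the nodes (20)
xx = np.linspace(-1.0, 1.0, 20001)
err = np.max(np.abs(np.exp(xx) - chebval(xx, coef)))
bound = 2 * np.e / factorial(n) * (2.0 / 4.0) ** n   # Lemma 4.1 with b-a = 2, M = e
```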

In addition to Lemma 4.1, we require a result about the existence of generalized Gaussian quadratures for Chebyshev systems. In 1866, Gauss [5] established the existence of quadrature nodes x_1, …, x_n and weights w_1, …, w_n for an interval [a,b] such that

 ∫_a^b f(x) dx = ∑_{j=1}^n w_j f(x_j),

whenever f is a polynomial of degree at most 2n − 1. This result was generalized from polynomials to Chebyshev systems by Kreĭn [10]. A collection of functions f_0, …, f_n on [a,b] is a Chebyshev system if every nonzero generalized polynomial

 g(t) = a_0 f_0(t) + ⋯ + a_n f_n(t),  for a_0, …, a_n ∈ ℝ,

has at most n distinct zeros in [a,b]. The following result of Kreĭn says that any function in the span of a Chebyshev system of 2n functions can be integrated exactly by a quadrature with n nodes and weights.

###### Lemma 4.2 (Kreĭn [10]).

Let f_0, …, f_{2n−1} be a Chebyshev system of continuous functions on [a,b], and let w be a continuous positive weight function. Then, there exist unique nodes x_1 < ⋯ < x_n and weights w_1, …, w_n such that

 ∫_a^b f(x) w(x) dx = ∑_{j=1}^n w_j f(x_j),

whenever f is in the span of f_0, …, f_{2n−1}.

### 4.3. Technical Lemmas

In this section, we state and prove two technical lemmas that are involved in the proof of Theorem 2.1. We remark that a similar version of Lemma 4.3 appears in [15].

###### Lemma 4.3.

Fix a > 0, n ≥ 1, and t ∈ [0,∞), and let r_1, …, r_n be Chebyshev nodes for [a, 2a]. If P_t is the interpolating polynomial for e^{−rt}, as a function of r, at r_1, …, r_n, then

 sup_{r∈[a,2a]} |e^{−rt} − P_t(r)| ≤ 1/4^n.
###### Proof.

We have

 sup_{r∈[a,2a]} |∂^n/∂r^n e^{−rt}| = sup_{r∈[a,2a]} t^n e^{−rt} = t^n e^{−ta}.

By writing the derivative of t ↦ t^n e^{−ta} as

 (d/dt) t^n e^{−ta} = (n/a − t) a t^{n−1} e^{−ta},

we can deduce that the maximum of t^n e^{−ta} over t ∈ [0,∞) occurs at t = n/a, that is,

 (21) sup_{t∈[0,∞)} t^n e^{−ta} = (n/a)^n e^{−n}.

By (21) and the result of Lemma 4.1, we conclude that

 sup_{r∈[a,2a]} |e^{−rt} − P_t(r)| ≤ (2 (n/a)^n e^{−n}/n!) (a/4)^n = (2 n^n e^{−n}/n!) (1/4^n).

It remains to show that 2 n^n e^{−n}/n! ≤ 1. Since ln is an increasing function, we have

 n ln n − n + 1 = ∫_1^n ln(x) dx ≤ ∫_1^n ∑_{j=1}^{n−1} χ_{[j,j+1]}(x) ln(j+1) dx = ∑_{j=1}^n ln(j).

Exponentiating both sides of this inequality gives e n^n e^{−n} ≤ n!, which is a classical inequality related to Stirling’s approximation; since e ≥ 2, it follows that 2 n^n e^{−n}/n! ≤ 1. This completes the proof. ∎
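The Stirling-type inequality used in this final step, e n^n e^{−n} ≤ n! (which yields 2 n^n e^{−n}/n! ≤ 1 since e > 2), is easy to verify directly for a range of n:

```python
from math import exp, factorial

# e * n^n * e^{-n} <= n!, written as n^n * e^{1-n} to keep n = 1 exact
ok = all(n**n * exp(1 - n) <= factorial(n) for n in range(1, 60))
```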

###### Lemma 4.4.

Suppose that ε ∈ (0,1) and M ≥ 2 are given. Then, there exist

 m = O(log(M) log(ε⁻¹))

values r_1, …, r_m > 0 such that for all r ∈ [1, M] we have

 (22) sup_{t∈[0,∞)} |e^{−rt} − ∑_{j=1}^m c_j(r) e^{−r_j t}| ≤ ε,

for some choice of coefficients c_1(r), …, c_m(r) that depend on r.

###### Proof.

We construct an explicit set of points and coefficients such that (22) holds. Set n := ⌈log₄(ε⁻¹)⌉. We define the points by

 (23) r_{in+k} := 2^{i−1} (3 + cos(π (k − 1/2)/n)),

for i = 0, …, ⌈log₂ M⌉ and k = 1, …, n, and define the coefficients by

 (24) c_{in+k}(r) := χ_{[2^i, 2^{i+1})}(r) ∏_{l=1, l≠k}^n (r − r_{in+l})/(r_{in+k} − r_{in+l}),

for i = 0, …, ⌈log₂ M⌉ and k = 1, …, n. We claim that

 sup_{r∈[1,M]} sup_{t∈[0,∞)} |e^{−rt} − ∑_{j=1}^m c_j(r) e^{−r_j t}| ≤ ε.

Indeed, fix r ∈ [1, M], and let i_0 be the unique integer such that r ∈ [2^{i_0}, 2^{i_0+1}). By the definition of the coefficients, see (24), we have

 ∑_{j=1}^m c_j(r) e^{−r_j t} = ∑_{k=1}^n e^{−r_{i_0 n+k} t} ∏_{l=1, l≠k}^n (r − r_{i_0 n+l})/(r_{i_0 n+k} − r_{i_0 n+l}).

We claim that the right-hand side of this equation is the interpolating polynomial P_{t,i_0}(r) for e^{−rt}, as a function of r, at r_{i_0 n+1}, …, r_{i_0 n+n}, that is,

 ∑_{k=1}^n e^{−r_{i_0 n+k} t} ∏_{l=1, l≠k}^n (r − r_{i_0 n+l})/(r_{i_0 n+k} − r_{i_0 n+l}) = P_{t,i_0}(r).

Indeed, see (18) and (19). Since the points r_{i_0 n+1}, …, r_{i_0 n+n} are Chebyshev nodes for the interval [2^{i_0}, 2^{i_0+1}], and since i_0 was chosen such that r ∈ [2^{i_0}, 2^{i_0+1}), it follows from Lemma 4.3 that

 |e^{−rt} − P_{t,i_0}(r)| ≤ 1/4^n  for t ∈ [0,∞).

Since 1/4^n ≤ ε, the proof is complete. ∎

###### Remark 4.1.

The proof of Lemma 4.4 has the additional consequence that the coefficients c_j(r) in (22) can be chosen such that they satisfy

 |c_j(r)| ≤ √2  for j = 1, …, m.

Indeed, in (24) the coefficients are either equal to zero or equal to a nodal polynomial, see (19), for Chebyshev nodes on an interval that contains r. The nodal polynomials for Chebyshev nodes on an interval are bounded by √2 on that interval, see for example [15]. The fact that e^{−rt} can be approximated as a linear combination of m functions with small coefficients means that the approximation of Lemma 4.4 can be used in finite-precision environments without any unexpected catastrophic cancellation.
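The √2 bound on the nodal polynomials can be observed numerically for Chebyshev nodes of the form (20) on [−1,1] (the node count 12 below is arbitrary):

```python
import numpy as np

n = 12
k = np.arange(1, n + 1)
nodes = np.cos(np.pi * (k - 0.5) / n)        # Chebyshev nodes (20) on [-1, 1]
xx = np.linspace(-1.0, 1.0, 50001)

def nodal(j, x):
    """The nodal polynomial q_j of (19), evaluated pointwise on x."""
    others = np.delete(nodes, j)
    return np.prod((x[:, None] - others) / (nodes[j] - others), axis=1)

max_abs = max(np.max(np.abs(nodal(j, xx))) for j in range(n))
```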

### 4.4. Completing the proof of Theorem 2.1

Previously, in §3.2, we showed that the algorithm of §3 involves O(nm + N_δ) operations. To complete the proof of Theorem 2.1 it remains to show that there exist

 m = O(log(ε⁻¹) log(δ⁻¹))

nodes t_1, …, t_m and weights w_1, …, w_m that satisfy (17); we show the existence of such nodes and weights in the following lemma, and thus complete the proof of Theorem 2.1. The computation of such nodes and weights is described in §5.2.

###### Lemma 4.5.

Fix a compact interval [a,b], and let ε > 0 and δ ∈ (0,1) be given. Then, there exist m = O(log(ε⁻¹) log(δ⁻¹)) nodes t_1, …, t_m and weights w_1, …, w_m such that

 (25) |1/r − ∑_{j=1}^m w_j e^{−r t_j}| ≤ ε,  for r ∈ [δ(b−a), b−a].
###### Proof.

Fix [a,b], and let ε > 0 and δ ∈ (0,1) be given. By the possibility of rescaling r, the nodes, and the weights, we may assume that we want to establish (25) for r ∈ [1, δ⁻¹]. By Lemma 4.4 we can choose 2m points r_0, …, r_{2m−1} and coefficients c_0(r), …, c_{2m−1}(r) depending on r such that

 (26) sup_{r∈[1,δ⁻¹]} sup_{t∈[0,∞)} |e^{−rt} − ∑_{j=0}^{2m−1} c_j(r) e^{−r_j t}| ≤ ε/(2 log(2ε⁻¹)).

The collection of functions e^{−r_0 t}, …, e^{−r_{2m−1} t} forms a Chebyshev system of continuous functions on the interval [0, log(2ε⁻¹)], see for example [9]. Thus, by Lemma 4.2 there exist quadrature nodes t_1, …, t_m and weights w_1, …, w_m such that

 ∫_0^{log(2ε⁻¹)} f(t) dt = ∑_{j=1}^m w_j f(t_j),

whenever f is in the span of e^{−r_0 t}, …, e^{−r_{2m−1} t}. By the triangle inequality

 (27) |1/r − ∑_{j=1}^m w_j e^{−r t_j}| ≤ |1/r − ∫_0^{log(2ε⁻¹)} e^{−rt} dt| + |∫_0^{log(2ε⁻¹)} e^{−rt} dt − ∑_{j=1}^m w_j e^{−r t_j}|.

Recall that we have assumed r ≥ 1; in particular, e^{−r log(2ε⁻¹)}/r ≤ ε/2, so it follows that

 (28) |1/r − ∫_0^{log(2ε⁻¹)} e^{−rt} dt| ≤ ε/2.

By (26), the function t ↦ e^{−rt} can be approximated to error ε/(2 log(2ε⁻¹)) in the L^∞-norm on [0, log(2ε⁻¹)] by functions in the span of e^{−r_0 t}, …, e^{−r_{2m−1} t}. Since our quadrature is exact for these functions, we conclude that

 (29) |∫_0^{log(2ε⁻¹)} e^{−rt} dt − ∑_{j=1}^m w_j e^{−r t_j}| ≤ ε/2.

Combining (27), (28), and (29) completes the proof. ∎
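The bound (28) has a closed form that is easy to check: ∫_0^T e^{−rt} dt = (1 − e^{−rT})/r with T = log(2ε⁻¹), so the truncation error is e^{−rT}/r, which is at most ε/2 once r ≥ 1. A numerical confirmation over the rescaled range r ∈ [1, δ⁻¹], with illustrative values of ε and δ:

```python
import numpy as np

eps, delta = 1e-6, 1e-4
T = np.log(2.0 / eps)
r = np.geomspace(1.0, 1.0 / delta, 500)
# 1/r - ∫_0^T e^{-rt} dt = 1/r - (1 - e^{-rT})/r = e^{-rT}/r
trunc_err = np.exp(-r * T) / r
max_err = np.max(trunc_err)
```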

### 4.5. Proof of Corollary 2.1

In this section, we prove Corollary 2.1, which states that the algorithm of §3 involves O(n log(n) log(ε⁻¹)) operations when x_1, …, x_n are Chebyshev nodes on [a,b] and δ = 1/(2n).

###### Proof of Corollary 2.1.

By rescaling the problem we may assume that [a,b] = [−1,1], such that the Chebyshev nodes are given by

 x_j = cos(π (j − 1/2)/n),  for j = 1, …, n.

By the result of Theorem 2.1, it suffices to show that N_δ = O(n log n), where

 N_δ := ∑_{j=1}^n #{x_i : |x_j − x_i| < 1/n}.

It is straightforward to verify that the number of Chebyshev nodes within an interval of radius 1/n around the point x_j is