
# Fast expansion into harmonics on the disk: a steerable basis with fast radial convolutions

We present a fast and numerically accurate method for expanding digitized L × L images representing functions on [-1,1]^2 supported on the disk {x ∈ℝ^2 : |x|<1} in the harmonics (Dirichlet Laplacian eigenfunctions) on the disk. Our method runs in 𝒪(L^2 log L) operations. This basis is also known as the Fourier-Bessel basis and it has several computational advantages: it is orthogonal, ordered by frequency, and steerable in the sense that images expanded in the basis can be rotated by applying a diagonal transform to the coefficients. Moreover, we show that convolution with radial functions can also be efficiently computed by applying a diagonal transform to the coefficients.


## 1. Introduction

### 1.1. Motivation

Decomposing a function into its Fourier series can be viewed as representing the function in the eigenfunctions of the Laplacian on the torus [0,2π) where 0 and 2π are identified. Indeed,

 −Δ e^{ıkx} = k² e^{ıkx}.

The eigenfunctions of the Laplacian (harmonics) on the disk that satisfy the Dirichlet boundary conditions can be written in polar coordinates as

 ψ_{nk}(r,θ) = c_{nk} J_n(λ_{nk} r) e^{ınθ}, (1)

where c_{nk} is a normalization constant, J_n is the n-th order Bessel function of the first kind, and λ_{nk} is the k-th smallest positive root of J_n. The indices run over n ∈ ℤ and k ∈ ℤ_{>0}. The functions satisfy

 −Δψ_{nk} = λ_{nk}² ψ_{nk}. (2)

In this paper, we present a fast and accurate transform of digitized images into this eigenfunction basis, often referred to as the Fourier-Bessel basis. For computational purposes, this basis is convenient for a number of reasons:

1. Orthonormal: these eigenfunctions are an orthonormal basis for square-integrable functions on the disk.

2. Ordered by frequency: the basis functions are ordered by their eigenvalues, which can be interpreted as frequencies due to the connection with the Laplacian and Fourier series described above; therefore, low-pass filtering can be performed by retaining basis coefficients up to a given threshold.

3. Steerable: functions expanded in the basis can be rotated by applying a diagonal transform corresponding to phase modulation of the coefficients.

4. Fast radial convolutions: we show that convolution with radial functions can be computed by applying a diagonal transform to the coefficients.
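To make the steerability property concrete, here is a minimal sketch (our illustration, not the paper's implementation) of rotating an expanded image by phase-modulating its coefficients; the arrays `coeffs` and `ns` (the angular index n of each basis function) are assumed to come from an expansion such as the one produced by the fle_2d package.

```python
import numpy as np

def rotate_coeffs(coeffs, ns, angle):
    """Rotate the represented image counterclockwise by `angle`.

    Since psi_nk(r, theta) has angular part exp(i*n*theta), rotating the
    image sends each coefficient a_nk to a_nk * exp(-i*n*angle): a
    diagonal transform on the coefficient vector.
    """
    return np.asarray(coeffs) * np.exp(-1j * np.asarray(ns) * angle)
```

Rotating by an angle and then by its negative returns the original coefficients, which is a quick sanity check on any implementation.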

Our method involves 𝒪(L² log L) operations and has precise accuracy guarantees. Python code that implements our method is publicly available online (an implementation is available at https://github.com/nmarshallf/fle_2d). To the best of our knowledge, existing methods for computing the expansion coefficients in a steerable basis either require 𝒪(L³) operations [19, 20] or lack accuracy guarantees [34, 35].

Steerable bases have been utilized in numerous image processing problems including image alignment [25], image classification [35], and image denoising [34], with applications to machine learning [5, 7, 33], to data-driven science such as cryo-electron microscopy (cryo-EM) [6, 23], and to computer vision [24], among other areas.

There are many possible choices of steerable bases, for instance prolate spheroidal wave functions which are bandlimited functions optimally supported on the disk [19, 20, 28], or Zernike polynomials which are widely used in optics [31]. The harmonics on the disk (which satisfy Dirichlet boundary conditions) [34, 35] are one natural choice due to their orthogonality, ordering by frequency, and fast radial convolution.

We illustrate the frequency ordering property of the Laplacian eigenbasis by performing low-pass filtering: we project onto the span of eigenfunctions whose eigenvalues are below a sequence of bandlimits that decrease the number of basis functions successively by factors of four, starting from m coefficients. Since the basis is orthonormal, this is equivalent to setting coefficients above the bandlimit equal to zero; see Fig. 1 (a–d). Further, we demonstrate the radial convolution property by illustrating convolution with a point spread function, which is a function used in computational microscopy [32]; see Fig. 1 (e–f). The image used for this example is a tomographic projection of a 3-D density map representing a biomolecule (the E. coli 70S ribosome) [26].
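Because the basis is orthonormal, the low-pass filter described above is a one-line operation on the coefficient vector; a minimal sketch (with hypothetical array names) follows.

```python
import numpy as np

def lowpass_coeffs(coeffs, lambdas, cutoff):
    """Project onto the span of eigenfunctions with lambda_nk <= cutoff
    by zeroing all higher-frequency coefficients (valid because the
    basis is orthonormal)."""
    out = np.array(coeffs, dtype=complex)
    out[np.asarray(lambdas) > cutoff] = 0
    return out
```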

### 1.2. Notation

Suppose that f is an L × L image representing a function f̃ : ℝ² → ℂ that is supported on the unit disk {x ∈ ℝ² : |x| < 1}. More precisely, we assume that

 f(j₁,j₂) = f̃(x_{j₁,j₂}), where x_{j₁,j₂} := (1/⌊L/2⌋) (j₁ − ⌊L/2⌋, j₂ − ⌊L/2⌋),

for j₁, j₂ = 1,…,L. Let p = L² be the number of pixels in the image f, and let

 x₁,…,x_p be an enumeration of {x_{j₁,j₂} : j₁, j₂ = 1,…,L}.

We can consider the image f as a vector in ℂ^p whose j-th entry is defined by

 f_j = f̃(x_j).

Similarly, for a fixed bandlimit parameter λ > 0, let

 ψ₁,…,ψ_m be an enumeration of {ψ_{nk} : λ_{nk} ≤ λ},

where ψ_{nk} is defined in (1) above. With this notation, the vector of coefficients of a function in the basis of eigenfunctions up to bandlimit λ is a vector in ℂ^m. By Weyl's law, see for example [17], we have m = 𝒪(λ²).
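The enumeration {ψ_{nk} : λ_{nk} ≤ λ} can be carried out with standard Bessel-root routines. The sketch below (our illustration, not the paper's code) uses `scipy.special.jn_zeros` and stops at the first angular order whose smallest root exceeds the bandlimit, which is valid since λ_{n1} is increasing in n; `max_roots` is an assumed cap on the number of roots per order.

```python
import numpy as np
from scipy.special import jn_zeros

def enumerate_basis(bandlimit, max_roots=200):
    """Return a list of (n, k, lambda_nk) with lambda_nk <= bandlimit.

    max_roots is assumed large enough that the first max_roots zeros of
    each J_n cover [0, bandlimit]; this holds whenever
    bandlimit < pi * max_roots (consecutive zeros are spaced roughly pi apart).
    """
    basis = []
    n = 0
    while True:
        roots = jn_zeros(n, max_roots)
        roots = roots[roots <= bandlimit]
        if len(roots) == 0:  # lambda_{n,1} grows with n, so we can stop
            break
        for k, lam in enumerate(roots, start=1):
            basis.append((n, k, lam))
            if n > 0:
                basis.append((-n, k, lam))  # negative orders share the roots
        n += 1
    return basis
```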

In the following, we consider a linear transformation B : ℂ^m → ℂ^p which maps coefficients in ℂ^m to images in ℂ^p, and its adjoint B* : ℂ^p → ℂ^m which maps images in ℂ^p to coefficients in ℂ^m.

### 1.3. Main result

We consider the linear transform B which maps coefficients to images by

 (Ba)_j = Σ_{i=1}^m a_i ψ_i(x_j) h, (3)

and its adjoint transform B* which maps images to coefficients by

 (B*f)_i = Σ_{j=1}^p f_j \overline{ψ_i(x_j)} h, (4)

where h > 0 is a normalization constant, chosen such that B*B is close to the identity. In this paper, we present a fast and accurate method to apply B and B*. In particular, the presented method can apply the operators B and B* to vectors in 𝒪(L² log L) operations for any fixed relative error ε > 0. We again emphasize that this is in contrast to previous results, since to the best of our knowledge, existing methods for computing the expansion coefficients in a steerable basis either require 𝒪(L³) operations [19, 20] or lack accuracy guarantees [34, 35]. The application of the operators B and B* can be used in an iterative method to determine least-squares optimal expansion coefficients for a given image. Alternatively, applying B* to f can be viewed as estimating the continuous inner products that define the coefficients by using quadrature points on a grid (and potentially quadrature weights to provide an endpoint correction).

We report the relative errors of our fast method for applying B and B* compared to dense matrix multiplication for a range of image sizes and bandlimits λ, and report timings for larger images. Moreover, we present a numerical example involving rotations, radial convolutions, and deconvolutions.

### 1.4. Organization

The remainder of the paper is organized as follows. In §2 we describe the analytical apparatus underlying the method. In §3 we describe the computational method. In §4 we justify why the discretization of the analytical apparatus performed by the computational method achieves the specified precision. In §5 we present numerical results. In §6 we discuss implications of the method and potential extensions.

## 2. Analytical apparatus

### 2.1. Notation

The eigenfunctions of the Laplacian on the unit disk (that satisfy Dirichlet boundary conditions) defined in (1) can be extended to ℝ² as functions supported on the unit disk by

 ψ_{nk}(r,θ) = c_{nk} J_n(λ_{nk} r) e^{ınθ} χ_{[0,1)}(r), (5)

for n ∈ ℤ and k ∈ ℤ_{>0}, where χ_{[0,1)} denotes the indicator function of [0,1). For the sake of completeness, we note that the normalization constants which ensure that ‖ψ_{nk}‖_{L²} = 1 are defined by

 c_{0k} = 1/(π^{1/2} J₁(λ_{0k})), and c_{nk} = 2^{1/2}/(π^{1/2} J_{n+1}(λ_{nk})), for n ≠ 0. (6)

We use the convention that the Fourier transform f̂ of an integrable function f is defined by

 f̂(ξ) = (1/2π) ∫_{ℝ²} f(x) e^{−ıx·ξ} dx, (7)

where x·ξ denotes the Euclidean inner product. We define the convolution of two functions f and g by

 (f∗g)(x) = ∫_{ℝ²} f(x−y) g(y) dy.

Furthermore, we will make use of the identity

 J_n(r) = (1/2π) ∫_0^{2π} e^{ır sin θ} e^{−ınθ} dθ, (8)

see for example [29, Eq. 9.19]. To verify this identity, recall that J_n is the solution to Bessel's equation

 r² ∂_{rr} y + r ∂_r y + (r² − n²) y = 0, (9)

see [9, Eq. 10.2.1]. Observe that

 (r² ∂_{rr} + r ∂_r + r²) e^{ır sin θ} = −∂_{θθ} e^{ır sin θ}.

Since taking the second derivative of a function multiplies its n-th Fourier coefficient by −n², it follows that

 (r² ∂_{rr} + r ∂_r + (r² − n²)) (1/2π) ∫_0^{2π} e^{ır sin θ} e^{−ınθ} dθ = 0, (10)

which establishes that (8) is a solution to (9). It is straightforward to verify that (8) also satisfies the same scaling relation as J_n(r) when r → 0, see [9, Eq. 10.2.2].
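Identity (8) can also be checked numerically: the trapezoidal rule converges spectrally for smooth periodic integrands, so a modest number of equispaced nodes reproduces J_n to near machine precision. (This snippet is our illustration; `scipy.special.jv` serves as the reference Bessel implementation.)

```python
import numpy as np
from scipy.special import jv

def bessel_via_integral(n, r, num_nodes=256):
    """Evaluate J_n(r) via (1/2pi) * integral over [0, 2pi) of
    exp(i*r*sin(t) - i*n*t), using the trapezoidal rule on
    equispaced nodes (exact mean of the samples)."""
    t = 2 * np.pi * np.arange(num_nodes) / num_nodes
    return np.mean(np.exp(1j * r * np.sin(t) - 1j * n * t)).real
```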

### 2.2. Fourier transform of eigenfunctions

The analytic foundation for the presented fast method is the following expression for the Fourier transform of the functions defined in (5).

###### Lemma 2.1.

The Fourier transform ψ̂_{nk} can be expressed as

 ψ̂_{nk}(ξ) = (−ı)^n e^{ınϕ} ∫_0^1 c_{nk} J_n(λ_{nk} r) J_n(ρ r) r dr, (11)

where (ρ, ϕ) are polar coordinates for ξ, that is, ξ = ρ(cos ϕ, sin ϕ).

We prove this lemma for completeness.

###### Proof of Lemma 2.1.

By the definition of the Fourier transform (7) we have

 ψ̂_{nk}(ξ) = (1/2π) ∫_{ℝ²} ψ_{nk}(x) e^{−ıx·ξ} dx.

Changing to polar coordinates x = r(cos θ, sin θ) and ξ = ρ(cos ϕ, sin ϕ) gives

 ψ̂_{nk}(ξ) = (1/2π) ∫_0^{2π} ∫_0^1 c_{nk} J_n(λ_{nk} r) e^{ınθ} e^{−ırρ cos(θ−ϕ)} r dr dθ,

where we used the fact that x·ξ = rρ cos(θ−ϕ). Changing variables in θ and applying the identity (8) to the integral over θ gives

 ψ̂_{nk}(ξ) = (−ı)^n e^{ınϕ} ∫_0^1 c_{nk} J_n(λ_{nk} r) J_n(ρ r) r dr,

as desired. ∎

### 2.3. Coefficients from eigenfunction Fourier transform

Next, we observe how the coefficients of a function in the eigenfunction basis can be computed by an application of Lemma 2.1. In the following, we will write the arguments of Fourier transforms of functions in polar coordinates (ρ, ϕ). We have the following result:

###### Lemma 2.2.

Suppose that I ⊂ ℤ × ℤ_{>0} is a finite index set, and set

 f = Σ_{(n,k)∈I} α_{nk} ψ_{nk}, (12)

where α_{nk} ∈ ℂ are coefficients. Define β_n by

 β_n(ρ) := ı^n ∫_0^{2π} f̂(ρ,ϕ) e^{−ınϕ} dϕ. (13)

It then holds that

 α_{nk} = c_{nk} β_n(λ_{nk}). (14)

The proof is a direct consequence of Lemma 2.1.

###### Proof of Lemma 2.2.

Observe that (11) implies

 ı^n ∫_0^{2π} ψ̂_{n′k′}(ρ,ϕ) e^{−ınϕ} dϕ = 2π δ(n−n′) ∫_0^1 c_{n′k′} J_n(λ_{n′k′} r) J_n(ρ r) r dr, (15)

where δ(t) = 1 if t = 0 and δ(t) = 0 otherwise. Evaluating (15) at radius ρ = λ_{nk} gives

 ı^n ∫_0^{2π} ψ̂_{n′k′}(λ_{nk},ϕ) e^{−ınϕ} dϕ = 2π δ(n−n′) ∫_0^1 c_{n′k′} J_n(λ_{n′k′} r) J_n(λ_{nk} r) r dr
  = 2π δ(n−n′) ∫_0^1 c_{nk′} J_n(λ_{nk′} r) J_n(λ_{nk} r) r dr
  = (1/c_{nk}) δ(n−n′) δ(k−k′),

where the final equality follows from the orthonormality of the eigenfunctions ψ_{nk} (which is a consequence of the fact that the Laplacian is self-adjoint). By the definition of β_n in (13), this implies that

 β_n(λ_{nk}) = Σ_{(n′,k′)∈I} α_{n′k′} ı^n ∫_0^{2π} ψ̂_{n′k′}(λ_{nk},ϕ) e^{−ınϕ} dϕ = Σ_{(n′,k′)∈I} (α_{n′k′}/c_{n′k′}) δ(n−n′) δ(k−k′) = α_{nk}/c_{nk},

which concludes the proof. ∎

###### Remark 2.1 (Special property of Bessel functions).

We emphasize that the integral expression (8) for the Bessel function is crucial for the fast method of this paper. The possibility of extending the approach to create other fast transforms defined on other domains therefore hinges on identifying equally useful integral expressions for the corresponding eigenfunctions.

### 2.4. Convolution with radial functions

Let g : ℝ² → ℂ be a radial function. In this section, we observe how the convolution with g can be computed via a diagonal transform of the coefficients. More precisely, we compute the projection of the convolution with g onto the span of any finite set of the eigenfunctions ψ_{nk}.

###### Lemma 2.3.

Let f be a function with coefficients α_{nk} as in (12), and let g be a radial function. We have

 P_I(f∗g) = 2π Σ_{(n,k)∈I} α_{nk} ĝ(λ_{nk}) ψ_{nk},

where P_I denotes the orthogonal projection onto the span of {ψ_{nk} : (n,k) ∈ I}.

The proof is a direct application of Lemma 2.2.

###### Proof of Lemma 2.3.

Write u = f∗g, and note that, since g is radial, ĝ(ξ) depends only on ρ = |ξ|; we write ĝ(ρ). With the normalization (7) of the Fourier transform, the convolution theorem reads û = 2π f̂ ĝ. Since the functions ψ_{nk} are an orthonormal basis, in order to compute the orthogonal projection P_I(u), it suffices to determine the coefficients of u with respect to ψ_{nk} for (n,k) ∈ I. We have

 ı^n ∫_0^{2π} û(λ_{nk},ϕ) e^{−ınϕ} dϕ = 2π ĝ(λ_{nk}) ı^n ∫_0^{2π} f̂(λ_{nk},ϕ) e^{−ınϕ} dϕ = 2π (α_{nk}/c_{nk}) ĝ(λ_{nk}),

where the final equality follows from (14). An application of Lemma 2.2 then completes the proof. ∎
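In coefficient space, the radial convolution of Lemma 2.3 is therefore a diagonal operation: each coefficient is multiplied by the radial Fourier transform of the filter evaluated at the corresponding Bessel root, times a constant that depends on the Fourier-transform normalization. A minimal sketch, with `g_hat` a user-supplied function and the overall constant assumed absorbed into it:

```python
import numpy as np

def radial_filter_coeffs(coeffs, lambdas, g_hat):
    """Apply convolution with a radial function g in coefficient space.

    g_hat(rho) is the (radial) Fourier transform of g; the overall
    constant (e.g. 2*pi, depending on the normalization convention)
    is assumed to be absorbed into g_hat.
    """
    return np.asarray(coeffs) * g_hat(np.asarray(lambdas))
```

Deconvolution works the same way, dividing by `g_hat(lambdas)` wherever it is safely bounded away from zero.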

## 3. Computational method

### 3.1. Notation

Recall that p is the number of pixels in the given image, and m is the number of basis functions in our expansion. Throughout, we write all computational complexities in terms of p with the understanding that m ≤ p. This assumption is not restrictive since it does not make sense to expand an image using more basis functions than the number of pixels.

###### Remark 3.1 (Precision of calculations).

The computational complexities also have a (logarithmic) dependence on the reciprocal of the desired precision ε; for simplicity, we treat ε as a fixed constant and do not include it in complexity statements.

###### Remark 3.2 (Continuous to discrete).

Throughout this section, for clarity of exposition, we use the same notation used in the continuous setting described in §2 to represent the corresponding discrete objects. We justify why the discrete sums of the algorithm will approximate the continuous integrals to the desired precision in §4.

### 3.2. Overview

In the following, we describe how to apply the operators B and B* defined above in §1.3 in 𝒪(p log p) operations. For the purpose of exposition, we start by describing a simplified method before presenting the full method. The section is organized as follows:

• In §3.3 we describe a simplified method to apply B and B* in 𝒪(p^{3/2}) operations. The simplified method is a direct application of the lemmas from the previous section.

• In §3.4 we provide an informal description of how to modify the simplified method to create a fast method to apply B and B* in 𝒪(p log p) operations. The main additional ingredient is fast interpolation from Chebyshev nodes.

• In §3.5 we give a detailed description of the fast method to apply B and B* in 𝒪(p log p) operations.

### 3.3. Simplified method

In this section, for the purpose of exposition, we present a simplified method that applies B and B* in 𝒪(p^{3/2}) operations. Recall that f₁,…,f_p are the pixel values of the image, x₁,…,x_p are the pixel coordinates, and ψ₁,…,ψ_m are the eigenfunctions below the given bandlimit, see §1.2. Let λ₁,…,λ_m denote the Bessel function roots (square roots of the eigenvalues) corresponding to ψ₁,…,ψ_m. Further, suppose that n_i and k_i are the indices such that ψ_i = ψ_{n_i k_i} and λ_i = λ_{n_i k_i}.

Fix a positive integer s = 𝒪(√p) (see §4.3 for a justification of this parameter choice). We first describe how to apply B*, which is defined by

 (B*f)_i = Σ_{j=1}^p f_j \overline{ψ_i(x_j)} h,

where h is a normalization parameter. The simplified method has three steps, which make use of the non-uniform fast Fourier transform (NUFFT) [11, 14, 22].

Step 1. Using the type-2 2-D NUFFT, in 𝒪(p log p + p^{3/2}) operations, compute

 f̂_{iℓ} := Σ_{j=1}^p f_j e^{−ıx_j·ξ_{iℓ}}, where ξ_{iℓ} := λ_i (cos(2πℓ/s), sin(2πℓ/s)),

for i = 1,…,m and ℓ = 1,…,s, to precision ε.

Step 2. Using the FFT, in 𝒪(m s log s) operations, compute

 β_n(λ_i) := Σ_{ℓ=1}^s f̂_{iℓ} e^{−ın2πℓ/s},

for i = 1,…,m and the required angular frequencies n.

Step 3. By Lemma 2.2 it follows that

 (B*f)_i = β_{n_i}(λ_i) c_i h,

where n_i is the integer such that ψ_i = ψ_{n_i k_i} and c_i = c_{n_i k_i}.
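The three steps above can be sketched directly in Python. For clarity, this illustration evaluates Step 1 by a direct 𝒪(p·m·s) sum where the real method uses a type-2 NUFFT, and constants such as ı^n and 2π/s are assumed absorbed into the normalization h (a choice made for this sketch, not taken from the paper).

```python
import numpy as np

def apply_bstar_simplified(img, lambdas, ns, cs, s):
    """Simplified B*: Fourier samples on polar rings (Step 1), an FFT in
    the angular variable (Step 2), and a read-off with scaling (Step 3).
    Direct summation stands in for the type-2 NUFFT of Step 1."""
    L = img.shape[0]
    half = L // 2
    h = 1.0 / half**2  # grid quadrature weight (assumed normalization)
    grid = (np.arange(L) - half) / half  # pixel coordinates in [-1, 1)
    X, Y = np.meshgrid(grid, grid, indexing="ij")
    xs, ys, fs = X.ravel(), Y.ravel(), img.ravel().astype(complex)
    m = len(lambdas)
    # Step 1: fhat[i, l] = sum_j f_j exp(-i x_j . xi_{i,l})
    phis = 2 * np.pi * np.arange(s) / s
    fhat = np.empty((m, s), dtype=complex)
    for i in range(m):
        xi_x = lambdas[i] * np.cos(phis)
        xi_y = lambdas[i] * np.sin(phis)
        fhat[i] = np.exp(-1j * (np.outer(xs, xi_x) + np.outer(ys, xi_y))).T @ fs
    # Step 2: an FFT over the angular index l computes beta_n(lambda_i)
    beta = np.fft.fft(fhat, axis=1)
    # Step 3: read off entry n_i (mod s) and scale by c_i * h
    return np.array([beta[i, ns[i] % s] for i in range(m)]) * np.asarray(cs) * h
```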

In the following remarks, we discuss how reversing these steps provides an immediate way to apply B, and we discuss limitations of the simplified method.

###### Remark 3.3 (Applying B).

Note that each step of the algorithm consists of applying a linear transform whose adjoint can be applied in a similar number of operations. Indeed, the adjoint of Step 1 (which uses the type-2 NUFFT) is a type-1 2-D NUFFT [1], and the adjoint of Step 2 (which uses a standard FFT) is an inverse FFT. The interpolation step of the fast method below can be computed in a variety of ways (including by using the NUFFT or FFT and sparse matrices, see Remark 3.5) which each have similarly fast adjoints [11, 14, 22]. Therefore, in order to apply B, which is defined by

 (Ba)_j = Σ_{i=1}^m a_i ψ_i(x_j) h,

we can simply apply the adjoint of each step in reverse order.

###### Remark 3.4 (Limitations of simplified method).

The problem with the simplified method is that the first step is prohibitively expensive if we want a method that runs in 𝒪(p log p) operations. The key to overcoming this limitation is to observe that the β_n are samples of analytic functions, and to add the additional ingredient of fast interpolation from Chebyshev nodes.

### 3.4. Summary of fast method

In this section, we describe how the method of the previous section can be improved from 𝒪(p^{3/2}) to 𝒪(p log p) operations by using fast interpolation from Chebyshev nodes. Recall that Lemma 2.2 defines

 β_n(ρ) := ı^n ∫_0^{2π} f̂(ρ,ϕ) e^{−ınϕ} dϕ,

and states that the coefficients α_{nk} can be computed by evaluating β_n at the roots λ_{nk}. The problem with the simplified method is the first step: computing f̂ at the m = 𝒪(p) radii λ_i and at a number s = 𝒪(√p) of angles, so that we are sampling at the Nyquist rate, requires a set of 𝒪(p^{3/2}) points, which is already prohibitive. Fortunately, there is a simple solution to this issue: for each n, the function β_n is an analytic function of ρ which can be interpolated to determine the coefficients, provided it is tabulated at q = 𝒪(√p) appropriate points; see §4.2 for a discussion of why this many points are sufficient. Here is an informal summary of the fast method:

• Compute the Fourier transform f̂ of f at the points

 ξ_{iℓ} := t_i (cos 2πϕ_ℓ, sin 2πϕ_ℓ),

for Chebyshev nodes t₁,…,t_q and angles ϕ_ℓ = ℓ/s, ℓ = 1,…,s.

• Compute β_n(t_i) for the Chebyshev nodes t_i and the required angular frequencies n.

• For each of the angular frequencies n, use fast interpolation from the Chebyshev nodes to the Bessel function roots λ_{nk} associated with that frequency. We illustrate the interpolation step in Fig. 2.

### 3.5. Detailed description of fast method

Recall that ψ₁,…,ψ_m are the eigenfunctions below our bandlimit, and λ₁,…,λ_m are the associated Bessel function roots (which are the square roots of the associated eigenvalues). Assume that λ₁ ≤ ⋯ ≤ λ_m are listed in ascending order, so that they are contained in the interval [λ₁, λ_m]. Fix q = ⌊max{2.4√p, log₂ ε⁻¹}⌋, where ε is the desired relative error; see §4.2 for a justification of this parameter choice. Recall that the Chebyshev nodes for the interval [λ₁, λ_m] are defined by

 t_i = (λ_m − λ₁) (1 − cos(π(i−1)/q))/2 + λ₁,

for i = 1,…,q. The fast method follows the same strategy as the simplified method. The key difference is that the fast method evaluates the Fourier transform on a smaller number of radial nodes (the Chebyshev nodes), and therefore requires a final fast interpolation step.

Step 1. Using the type-2 2-D NUFFT, in 𝒪(p log p) operations, compute

 f̂_{iℓ} := Σ_{j=1}^p f_j e^{−ıx_j·ξ_{iℓ}}, where ξ_{iℓ} := t_i (cos(2πℓ/s), sin(2πℓ/s)),

for i = 1,…,q and ℓ = 1,…,s.

Step 2. Using the FFT, in 𝒪(q s log s) operations, compute

 β_n(t_i) := Σ_{ℓ=1}^s f̂_{iℓ} e^{−ın2πℓ/s},

for i = 1,…,q and the required angular frequencies n.

Step 3. Using fast interpolation from the Chebyshev nodes, in 𝒪(p log p) operations, compute

 (B*f)_i = β_{n_i}(λ_i) c_i h,

for i = 1,…,m, where n_i is the integer such that ψ_i = ψ_{n_i k_i} and c_i = c_{n_i k_i}.

As detailed in Remark 3.3, the fact that each step is a linear operator whose adjoint can be applied in a similar number of operations provides a similarly fast method to apply B.

###### Remark 3.5 (Methods for fast interpolation from Chebyshev nodes).

Theoretically, the most straightforward way to perform fast interpolation from Chebyshev nodes is to consider the function values at the Chebyshev nodes as function values associated with equally spaced points on a half circle (by a change of variables). The fast interpolation (called spectral interpolation) can then be computed by using the NUFFT in near-linear time in the number of nodes, or from arbitrary nodes using the Fast Multipole Method, see [10]. Practically speaking, since the NUFFT run-time constant is high, choosing a fixed number of source points centered around each target point and then applying a precomputed sparse (barycentric interpolation [3]) matrix may be more practical; sparse interpolation can be used in combination with standard spectral interpolation (using the discrete cosine transform) to double or quadruple the number of Chebyshev nodes before the sparse interpolation step.
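A sketch of the basic building block: barycentric interpolation from Chebyshev points (here the Chebyshev extrema, matching the node formula in §3.5, with the standard weights (−1)^i halved at the endpoints). This is an illustration of the technique, not the paper's implementation, and it loops over targets for clarity rather than speed.

```python
import numpy as np

def cheb_interp(a, b, fvals, targets):
    """Barycentric interpolation from the q+1 Chebyshev extrema mapped
    to [a, b], where fvals[i] = f(t_i) at the nodes
    t_i = a + (b - a) * (1 - cos(pi*i/q)) / 2, i = 0..q."""
    fvals = np.asarray(fvals)
    q = len(fvals) - 1
    nodes = a + (b - a) * (1 - np.cos(np.pi * np.arange(q + 1) / q)) / 2
    w = (-1.0) ** np.arange(q + 1)  # barycentric weights for extrema
    w[0] *= 0.5
    w[-1] *= 0.5
    out = np.empty(len(targets))
    for j, x in enumerate(targets):
        d = x - nodes
        exact = np.isclose(d, 0.0)
        if exact.any():           # target coincides with a node
            out[j] = fvals[np.argmax(exact)]
        else:
            out[j] = (w / d) @ fvals / np.sum(w / d)
    return out
```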

## 4. Continuous to discrete

We conclude the description of the fast algorithm by discussing additional analytic details that justify transitioning from the continuous identities in §2 to the discrete implementations of §3.

### 4.1. Maximum bandlimit

In this section, we derive a bound for the bandlimit parameter λ, which controls the number of basis functions used in the expansion. The number of basis functions should not exceed the number of pixels of the image corresponding to points within the unit disk. In combination with Weyl's law, this leads to a bound on the maximum bandlimit.

In more detail, from Weyl’s law, the number of Dirichlet eigenvalues of the Laplacian in the disk is bounded by is

 (16)

see [8]. Similarly, the inscribed circle in an image has radius . The number of pixels contained inside this circle is therefore

 #{(i,j)∈Z×Z:i2+j2≤⌊L−12⌋2}=π⌊L−12⌋2+O(L2/3), (17)

see for example [13, 16, 30]. Equating (16) with (17) results in

 λ=2√π⌊L−12⌋+1+O(L−1/3). (18)

Practically speaking, it can be advantageous to expand the image in fewer basis functions than described by (18), but, in any case, is a natural choice and different motivations exist in the literature [34].

###### Remark 4.1 (FFT bandlimit heuristic).

One heuristic for setting the bandlimit is based on the fast Fourier transform (FFT). For a centered FFT of a signal of length L on [−1,1], the maximum frequency is L/2, which corresponds to a bandlimit of λ = πL/2. Note that

 πL/2 ≈ 1.57L < 1.77L ≈ 2√π ⌊(L−1)/2⌋, (19)

so this FFT bandlimit heuristic does indeed produce a reasonable bandlimit below the bound (18) derived from Weyl's law. We use this bandlimit for our numerical experiments. The computational complexity and accuracy guarantees of the method presented in this paper hold for any bandlimit up to the maximum (18). However, the fact that the fast method performs interpolation in Fourier space inside a disk bounded by the maximum bandlimit provides additional motivation for this FFT-based heuristic, since it ensures that the disk is contained within the square in frequency space used by the two-dimensional FFT.
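The comparison in (19) is easy to sanity-check numerically; the snippet below evaluates both bandlimits for a hypothetical image size.

```python
import numpy as np

L = 128  # hypothetical image side length
fft_bandlimit = np.pi * L / 2  # heuristic from the centered FFT
weyl_bandlimit = 2 * np.sqrt(np.pi) * ((L - 1) // 2)  # leading term of (18)
print(fft_bandlimit, weyl_bandlimit)  # the FFT heuristic is the smaller one
```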

### 4.2. Number of radial nodes

The following lemma shows that q = 𝒪(√p + log ε⁻¹) Chebyshev nodes are sufficient for accurate interpolation.

###### Lemma 4.1.

When running the algorithm on an image with p pixels at the maximum bandlimit, choosing the number of Chebyshev nodes q to satisfy

 q = ⌊max{2.4√p, log₂ ε⁻¹}⌋ (20)

achieves relative error less than ε in the interpolation step of §3.5.

It will be clear from the proof that the constant 2.4 in the statement of the lemma is an overestimate; see Remark 4.2 for a discussion of how this constant can be improved.

###### Proof of Lemma 4.1.

When interpolating a smooth function defined on the interval [a,b] using an interpolating polynomial at q Chebyshev nodes, the residual term R(x) satisfies

 |R(x)| ≤ (2 C_q / q!) ((b − a)/4)^q,

where C_q is a bound on the q-th derivative of the function on [a,b]; indeed, this can be deduced from [9, §3.3]. If we apply this result to β_n on [a,b] = [λ₁, λ_m], the residual satisfies

 |R(x)| ≤ (2 C_q / q!) ((λ_m − λ₁)/4)^q ≤ (2 C_q / q!) (√(πp)/4)^q,

where the final inequality follows from the bound λ_m − λ₁ ≤ √(πp); see §4.1. In order to apply this bound, we estimate

 C_q := max_{ρ ∈ [λ₁, λ_m]} |β_n^{(q)}(ρ)|,

where β_n^{(q)} denotes the q-th derivative of β_n defined in (13). Since f is compactly supported, its Fourier transform is analytic, and the C_q are therefore all finite. More precisely, the term C_q can be bounded uniformly by using

 |β_n^{(q)}(ρ)| = |d^q/dρ^q ((1/2π) ∫_0^{2π} f̂(ρ,ϕ) e^{−ınϕ} dϕ)| ≤ (1/2π) ∫_0^{2π} |d^q/dρ^q f̂(ρ,ϕ)| dϕ.

We write out

 |d^q/dρ^q f̂(ρ,ϕ)| = |(1/2π) d^q/dρ^q ∫_0^{2π} ∫_0^∞ f(r,θ) e^{−ırρ cos(θ−ϕ)} r dr dθ|
  = |(1/2π) ∫_0^{2π} ∫_0^∞ (−ır cos(θ−ϕ))^q f(r,θ) e^{−ırρ cos(θ−ϕ)} r dr dθ|
  = |(1/2π) ∫_0^{2π} ∫_0^1 (−ır cos(θ−ϕ))^q f(r,θ) e^{−ırρ cos(θ−ϕ)} r dr dθ|
  ≤ (1/2π) ∫_0^{2π} ∫_0^1 |f(r,θ)| r dr dθ = (1/2π) ‖f‖_{L¹} ≤ (1/(2√π)) ‖f‖_{L²},

where the third equality uses the fact that f is supported on the disk of radius 1, and the last inequality uses Cauchy–Schwarz applied to f multiplied by the indicator function of the unit disk. It follows that

 |R(x)| ≤ (1/(√π q!)) (√(πp)/4)^q ‖f‖_{L²}.

Therefore, in order to achieve relative error ε, it suffices to choose q such that

 (1/(√π q!)) (√(πp)/4)^q ≤ ε. (21)

It follows from Stirling's approximation [9, 5.11.3] that

 |R(x)| / ‖f‖_{L²} ≤ (√(πp) e / (4q))^q. (22)

Setting the base √(πp) e/(4q) equal to 1/2 and solving for q gives q = (√π e/2) √p ≈ 2.4√p, so that

 q ≥ max{2.4√p, log₂ ε⁻¹}

is sufficient to achieve relative error less than ε. ∎

### 4.3. Number of angular nodes

The following lemma shows that s = 𝒪(√p + log ε⁻¹) equispaced angular nodes suffice to achieve relative error ε.

###### Lemma 4.2.

When running the algorithm on an image with p pixels at the maximum bandlimit, choosing the number of equispaced angular nodes s to satisfy

 s = ⌊max{7.08√p, log₂ ε⁻¹ + log₂ p}⌋ (23)

achieves relative error less than ε in the FFT step of §3.5.

As above, we emphasize that the constant 7.08 in the statement of this result is an overestimate. See Remark 4.2 for a discussion of how this constant can be improved.

###### Proof of Lemma 4.2.

We will require a classical result, see for example [18]: suppose that f is a smooth periodic function on the torus [0,2π) where 0 and 2π are identified. Then

 |∫_0^{2π} f(θ) dθ − (2π/n) Σ_{j=0}^{n−1} f(θ_j)| ≤ 2√(2π) ‖f^{(n)}‖_{L²} / n^n,

for all n ≥ 1, where θ_j = 2πj/n.

Indeed, we start by writing f(θ) = Σ_{k=−∞}^∞ a_k e^{ıkθ} and use the fact that (1/n) Σ_{j=0}^{n−1} e^{ıkθ_j} is equal to 1 if k is a multiple of n and is 0 otherwise to obtain:

 |∫_0^{2π} f(θ) dθ − (2π/n) Σ_{j=0}^{n−1} f(θ_j)| = |2π a₀ − (2π/n) Σ_{j=0}^{n−1} Σ_{k=−∞}^∞ a_k e^{ıkθ_j}| = 2π |Σ_{t≠0} a_{nt}|.

Multiplying and dividing each term by (ınt)^n, and using Cauchy–Schwarz:

 2π |Σ_{t≠0} a_{nt} (ınt)^n / (ınt)^n| ≤ 2π (Σ_{t≠0} |(ınt)^n a_{nt}|²)^{1/2} (Σ_{t>0} 2/(nt)^{2n})^{1/2}
  ≤ 2π (Σ_{ℓ=−∞}^∞ |(ıℓ)^n a_ℓ|²)^{1/2} (Σ_{t>0} 2/(nt)^{2n})^{1/2} ≤ 2√(2π) ‖f^{(n)}‖_{L²} / n^n,

where the last inequality follows from Parseval's identity Σ_ℓ |(ıℓ)^n a_ℓ|² = ‖f^{(n)}‖²_{L²}/(2π), together with Σ_{t>0} t^{−2n} ≤ 2.

In order to determine the required number of angular nodes, we write the image and its Fourier transform as

 f(x) = Σ_{j=1}^p f_j δ_{x_j}(x), f̂(ξ) = (1/2π) Σ_{j=1}^p f_j e^{−ıξ·x_j}.

By the triangle inequality, it suffices to estimate integrals of functions of the form

 g(ϕ) = e^{ırρ cos(θ−ϕ) − ınϕ}

to precision ε/p for all r ∈ [0,1], ρ ∈ [λ₁, λ_m], θ ∈ [0,2π), and |n| ≤ n_m, where n_m denotes the maximum angular frequency. We have the estimate

 |g^{(s)}(ϕ)| ≤ (λ_m + n_m)^s.

Thus, in order to integrate these functions to the desired precision, we need to choose s such that (up to modest constant factors)

 ((λ_m + n_m)/s)^s ≤ ε/p. (24)

It follows that choosing

 s = max{2(λ_m + n_m), log₂ ε⁻¹ + log₂ p}

achieves relative error ε. To complete the proof we note that λ_m ≤ λ, where λ is the maximum bandlimit from §4.1. Also, by [9, 10.21.40] we have

 λ_{n1} = n + 1.8575 n^{1/3} + 𝒪(n^{−1/3}),

which implies that the maximum angular frequency satisfies n_m ≤ λ_m ≤ λ. We conclude that

 s = max{4λ, log₂ ε⁻¹ + log₂ p}

is sufficient to achieve error ε; using the bound λ ≤ √(πp) from §4.1 (note 4√π ≈ 7.08) completes the proof. ∎

###### Remark 4.2 (Improving estimates for number of radial and angular nodes).

While Lemmas 4.1 and 4.2 show that the number of radial nodes and the number of angular nodes are both 𝒪(√p), the constants in the lemmas are not optimal. For practical purposes, choosing the minimal number of nodes possible to achieve the desired error is advantageous to improve the run-time constant of the algorithm, and it is clear from the proofs how the estimates can be refined. For Lemma 4.1, motivated by (21), we compute

 ε_radial(q) = (1/(√π q!)) ((λ_m − λ₁)/4)^q,

for q = 1, 2, 3, … and choose the smallest value q* of q such that ε_radial(q*) ≤ ε. Similarly, for Lemma 4.2, motivated by (24), we compute

 ε_angular(s) = p ((λ_m + n_m)/s)^s,

for s = 1, 2, 3, … and choose the smallest value s* of s such that ε_angular(s*) ≤ ε. Then it follows that q and s can be replaced by q* and s* in the statements of Lemmas 4.1 and 4.2, respectively. This procedure improves the estimates of the required number of radial and angular nodes by a constant factor.
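The refinement procedure of Remark 4.2 is a short linear search; the sketch below (our illustration, with our parameter names) works in log-space to avoid overflow of the factorial and power terms.

```python
from math import lgamma, log, pi, sqrt

def min_radial_nodes(lam_span, eps):
    """Smallest q with eps_radial(q) = (lam_span/4)**q / (sqrt(pi)*q!) <= eps."""
    q = 1
    while -0.5 * log(pi) - lgamma(q + 1) + q * log(lam_span / 4) > log(eps):
        q += 1
    return q

def min_angular_nodes(p, lam_m, n_m, eps):
    """Smallest s with eps_angular(s) = p * ((lam_m + n_m)/s)**s <= eps."""
    s = 1
    while log(p) + s * log((lam_m + n_m) / s) > log(eps):
        s += 1
    return s
```

In practice the values returned are noticeably smaller than the conservative constants 2.4 and 7.08 of Lemmas 4.1 and 4.2.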

## 5. Numerical results

### 5.1. Accuracy results

In this section, we report numerical results for the accuracy of the fast algorithm compared to dense matrix multiplication. Recall that B maps coefficients to images by

 (Ba)_j = Σ_{i=1}^m a_i ψ_i(x_j) h,

and its adjoint transform B* maps images to coefficients by

 (B*f)_i = Σ_{j=1}^p f_j \overline{ψ_i(x_j)} h,

see §1.3. By defining the matrix B ∈ ℂ^{p×m} by

 B(j,i) = ψ_i(x_j) h,

we can apply B and B* by dense matrix multiplication to test the accuracy of our fast method. Since the size of this matrix scales like 𝒪(L⁴) for L × L images, constructing these matrices quickly becomes prohibitive, so the comparison is only given up to moderate image sizes, see Table 1, where