Decomposing a function into its Fourier series can be viewed as representing the function in the eigenfunctions of the Laplacian on the torus $[0, 2\pi)$, where $0$ and $2\pi$ are identified. Indeed,
The eigenfunctions of the Laplacian (harmonics) on the disk that satisfy the Dirichlet boundary conditions can be written in polar coordinates $(r, \theta)$ as
$$\psi_{n,k}(r,\theta) = c_{n,k} J_n(\lambda_{n,k} r) e^{i n \theta},$$
where $c_{n,k}$ is a normalization constant, $J_n$ is the $n$-th order Bessel function of the first kind, and $\lambda_{n,k}$ is the $k$-th smallest positive root of $J_n$. The indices run over $n \in \mathbb{Z}$ and $k \in \mathbb{Z}_{>0}$. The functions satisfy
$$-\Delta \psi_{n,k} = \lambda_{n,k}^2 \psi_{n,k} \quad \text{on the unit disk}, \qquad \psi_{n,k} = 0 \quad \text{on its boundary}.$$
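Since the basis functions are built from standard Bessel functions, they are straightforward to evaluate numerically. The following sketch uses SciPy's Bessel routines; the normalization $c_{n,k} = 1/(\sqrt{\pi}\,|J_{n+1}(\lambda_{n,k})|)$ is the standard $L^2$ choice and is an assumption here, not a quotation from the text:

```python
import numpy as np
from scipy.special import jv, jn_zeros

def psi(n, k, r, theta):
    """Disk harmonic psi_{n,k}(r, theta) = c_{n,k} J_n(lambda_{n,k} r) e^{i n theta},
    normalized (standard choice) so its L^2 norm over the unit disk is 1."""
    lam = jn_zeros(n, k)[-1]  # lambda_{n,k}: k-th smallest positive root of J_n
    c = 1.0 / (np.sqrt(np.pi) * abs(jv(n + 1, lam)))
    return c * jv(n, lam * r) * np.exp(1j * n * theta)

# Dirichlet boundary condition: the harmonic vanishes on the unit circle r = 1.
boundary_value = psi(2, 3, 1.0, 0.7)
```

Evaluating on the boundary returns a value at the level of root-finding precision, reflecting $J_n(\lambda_{n,k}) = 0$.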
In this paper, we present a fast and accurate transform of digitized images into this eigenfunction basis, often referred to as the Fourier-Bessel basis. For computational purposes, this basis is convenient for a number of reasons:
Orthonormal: these eigenfunctions are an orthonormal basis for square integrable functions on the disk.
Ordered by frequency:
the basis functions are ordered by eigenvalues, which can be interpreted as frequencies due to the connection with the Laplacian and Fourier series described above; therefore, low-pass filtering can be performed by retaining basis coefficients up to a given threshold.
Steerable: functions expanded in the basis can be rotated by applying a diagonal transform corresponding to phase modulation of the coefficients.
Fast radial convolutions: we show that the convolution with radial functions can be computed by applying a diagonal transform to the coefficients.
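The steerable property can be made concrete in a few lines. The sketch below assumes a hypothetical flat coefficient layout in which `ns[i]` stores the angular index $n$ of the $i$-th basis function; it is an illustration of the diagonal phase transform, not the paper's implementation:

```python
import numpy as np

def rotate_coeffs(coeffs, ns, alpha):
    """Rotate the underlying function by angle alpha: since
    psi_{n,k}(r, theta - alpha) = e^{-i n alpha} psi_{n,k}(r, theta),
    rotation is a diagonal (phase modulation) transform on coefficients."""
    return coeffs * np.exp(-1j * np.asarray(ns) * alpha)

# Rotating by alpha and then by -alpha recovers the original coefficients.
rng = np.random.default_rng(0)
coeffs = rng.standard_normal(5) + 1j * rng.standard_normal(5)
ns = np.array([-2, -1, 0, 1, 2])
back = rotate_coeffs(rotate_coeffs(coeffs, ns, 0.3), ns, -0.3)
```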
Our method involves $O(N^2 \log N)$ operations for an $N \times N$ image and has precise accuracy guarantees. Python code that implements our method is publicly available online (an implementation is available at https://github.com/nmarshallf/fle_2d). To the best of our knowledge, existing methods for computing the expansion coefficients in a steerable basis either require $O(N^3)$ operations [19, 20] or lack accuracy guarantees [34, 35].
Steerable bases have found use in many areas, including machine learning [5, 7, 33] and data-driven science, such as applications to cryo-electron microscopy (cryo-EM) [6, 23], and computer vision.
There are many possible choices of steerable bases, for instance prolate spheroidal wave functions, which are bandlimited functions optimally supported on the disk [19, 20, 28], or Zernike polynomials, which are widely used in optics. The harmonics on the disk (which satisfy Dirichlet boundary conditions) [34, 35] are one natural choice due to their orthogonality, ordering by frequency, and fast radial convolution.
We illustrate the frequency ordering property of the Laplacian eigenbasis by performing low-pass filtering: we project onto the span of eigenfunctions whose eigenvalues are below a sequence of bandlimits that successively decrease the number of basis functions by factors of four. Since the basis is orthonormal, this is equivalent to setting coefficients above the bandlimit equal to zero; see Fig. 1 (a–d). Further, we demonstrate the radial convolution property by illustrating the convolution with a point spread function, a type of function used in computational microscopy; see Fig. 1 (e–f). The image used for this example is a tomographic projection of a 3-D density map representing a bio-molecule (E. coli 70S ribosome).
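In coefficient space, the low-pass projection of Fig. 1 amounts to thresholding. A minimal sketch, assuming hypothetical arrays `coeffs` and `lams` holding the expansion coefficients and the corresponding roots $\lambda_{n,k}$:

```python
import numpy as np

def lowpass(coeffs, lams, bandlimit):
    """Project onto the span of eigenfunctions with lambda_{n,k} <= bandlimit
    by zeroing the remaining coefficients (valid since the basis is orthonormal)."""
    return np.where(np.asarray(lams) <= bandlimit, coeffs, 0.0)

# Coefficients whose frequency exceeds the bandlimit are set to zero.
filtered = lowpass(np.array([1.0, 2.0, 3.0]), np.array([2.4, 5.5, 8.7]), 6.0)
```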
Suppose that $f$ is an $N \times N$ image representing a function supported on the unit disk $D = \{x \in \mathbb{R}^2 : \|x\| \le 1\}$. More precisely, we assume that
for pixel coordinates $x_j \in D$. Let $N^2$ be the number of pixels in the image $f$, and let
We can consider the image
as a vector whose $j$-th entry is defined by
Similarly, for a fixed bandlimit parameter $\lambda > 0$, let
In the following, we consider a linear transformation $B$, which maps coefficients to images, and its adjoint $B^*$, which maps images to coefficients.
1.3. Main result
We consider the linear transform $B$, which maps coefficients $\alpha$ to images by
and its adjoint transform $B^*$, which maps images to coefficients by
where $c$ is a normalization constant, chosen such that $B^* B$ is close to the identity. In this paper, we present a fast and accurate method to apply $B$ and $B^*$. In particular, the presented method can apply these operators to vectors in $O(N^2 \log N)$ operations for any fixed relative error $\epsilon > 0$. We again emphasize that this is in contrast to previous results, since to the best of our knowledge, existing methods for computing the expansion coefficients in a steerable basis either require $O(N^3)$ operations [19, 20] or lack accuracy guarantees [34, 35]. The application of the operators $B$ and $B^*$ can be used in an iterative method to determine least squares optimal expansion coefficients for a given image. Alternatively, applying $B^*$ to an image
can be viewed as estimating the continuous inner products that define the coefficients by using quadrature points on a grid (and potentially quadrature weights to provide an endpoint correction).
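To make the quadrature viewpoint concrete, a single coefficient can be estimated by a Riemann sum over the pixel grid. The sketch below is a slow reference computation, not the fast method; the grid convention (spacing $h = 2/N$ on $[-1,1]^2$) and the $L^2$ normalization of the basis are assumptions for the example:

```python
import numpy as np
from scipy.special import jv, jn_zeros

def coeff_by_quadrature(f_grid, xs, ys, n, k, h):
    """Estimate <f, psi_{n,k}> by a Riemann sum over the pixel grid,
    i.e. a quadrature rule with equal weights h^2."""
    r = np.hypot(xs, ys)
    t = np.arctan2(ys, xs)
    lam = jn_zeros(n, k)[-1]
    c = 1.0 / (np.sqrt(np.pi) * abs(jv(n + 1, lam)))
    psi = c * jv(n, lam * r) * np.exp(1j * n * t) * (r <= 1)
    return h**2 * np.sum(f_grid * np.conj(psi))

# Sanity check: the coefficient of psi_{0,1} against itself should be ~1.
N = 256
h = 2.0 / N
grid = (np.arange(N) - N / 2 + 0.5) * h
xs, ys = np.meshgrid(grid, grid)
r = np.hypot(xs, ys)
lam01 = jn_zeros(0, 1)[0]
c01 = 1.0 / (np.sqrt(np.pi) * abs(jv(1, lam01)))
f = c01 * jv(0, lam01 * r) * (r <= 1)
a = coeff_by_quadrature(f, xs, ys, 0, 1, h)
```

By orthonormality, the estimated coefficient approaches $1$ as the grid is refined.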
We report the relative errors of our fast method for applying $B$ and $B^*$ compared to dense matrix multiplication for a range of image sizes and bandlimits, and report timings for larger images. Moreover, we present a numerical example involving rotations, radial convolutions, and deconvolutions.
The remainder of the paper is organized as follows. In §2 we describe the analytical apparatus underlying the method. In §3 we describe the computational method. In §4 we justify why the discretization of the analytical apparatus performed by the computational method achieves the specified precision. In §5 we present numerical results. In §6 we discuss implications of the method and potential extensions.
2. Analytical apparatus
The eigenfunctions of the Laplacian on the unit disk (that satisfy Dirichlet boundary conditions) defined in (1) can be extended to $\mathbb{R}^2$ as functions supported on the unit disk by
$$\psi_{n,k}(x) = c_{n,k} J_n(\lambda_{n,k} |x|) e^{i n \theta(x)} \chi_D(x)$$
for $x \in \mathbb{R}^2$, where $\chi_D$ denotes an indicator function for the unit disk $D$ and $\theta(x)$ is the angular coordinate of $x$. For the sake of completeness, we note that the normalization constants $c_{n,k}$, which ensure that $\|\psi_{n,k}\|_{L^2} = 1$, are defined by
$$c_{n,k} = \frac{1}{\sqrt{\pi}\, |J_{n+1}(\lambda_{n,k})|}.$$
We use the convention that the Fourier transform $\hat{f}$ of an integrable function $f$ is defined by
$$\hat{f}(\xi) = \int_{\mathbb{R}^2} f(x)\, e^{-i \langle x, \xi \rangle}\, dx,$$
where $\langle \cdot, \cdot \rangle$ denotes the Euclidean inner product. We define the convolution of two functions $f$ and $g$ by
$$(f * g)(x) = \int_{\mathbb{R}^2} f(y)\, g(x - y)\, dy.$$
Furthermore, we will make use of the identity
$$J_n(x) = \frac{1}{2\pi} \int_0^{2\pi} e^{i(n\theta - x \sin\theta)}\, d\theta;$$
see for example [29, Eq. 9.19]. To verify this identity, recall that $J_n$ is the solution to Bessel's equation
$$x^2 y'' + x y' + (x^2 - n^2)\, y = 0;$$
see [9, Eq. 10.2.1]. Observe that
Since taking the second derivative of a function multiplies its $m$-th Fourier coefficient by $-m^2$, it follows that
2.2. Fourier transform of eigenfunctions
The analytic foundation for the presented fast method is the following expression for the Fourier transform of the functions $\psi_{n,k}$ defined in (5).
The Fourier transform $\hat{\psi}_{n,k}$ can be expressed by
where $(\lambda, \varphi)$ are polar coordinates for the frequency variable $\xi$.
We prove this lemma for completeness.
2.3. Coefficients from eigenfunction Fourier transform
Next, we observe how the coefficients of a function in the eigenfunction basis can be computed by an application of Lemma 2.1. In the following, we will write the arguments of Fourier transforms of functions in polar coordinates $(\lambda, \varphi)$. We have the following result:
Suppose that $I$ is a finite index set, and set
where $\alpha_{n,k} \in \mathbb{C}$ are coefficients. Define $\beta$ by
It then holds that
The proof is a direct consequence of Lemma 2.1.
Proof of Lemma 2.2.
Observe that (11) implies
where $\delta_{jk}$ equals $1$ if $j = k$ and $0$ otherwise. Evaluating (15) at radius $\lambda = \lambda_{n,k}$ gives
where the final equality follows from the orthogonality of the eigenfunctions $\psi_{n,k}$ (which is a consequence of the fact that the Laplacian is self-adjoint). By the definition of $\beta$ in (13), this implies that
which concludes the proof. ∎
Remark 2.1 (Special property of Bessel functions).
We emphasize that the integral expression (8) of the Bessel function is crucial for the fast method of this paper. The possibility of extending the approach to create other fast transforms on different domains therefore hinges on identifying equally useful integral expressions for the corresponding transforms.
2.4. Convolution with radial functions
Let $g$ be a radial function. In this section, we observe how the convolution with $g$ can be computed via a diagonal transform of the coefficients. More precisely, we compute the projection of the convolution with $g$ onto the span of any finite collection of the eigenfunctions $\psi_{n,k}$.
Let $f$ be a function with coefficients $\alpha_{n,k}$ as in (12), and let $g$ be a radial function. We have
where $P$ denotes the orthogonal projection onto the span of the $\psi_{n,k}$ for $(n,k) \in I$.
The proof is a direct application of Lemma 2.2.
Proof of Lemma 2.3.
Since the functions $\psi_{n,k}$ are an orthonormal basis, in order to compute the orthogonal projection of $f * g$, it suffices to determine the coefficients of $f * g$ with respect to $\psi_{n,k}$ for $(n,k) \in I$. Since $\widehat{f * g} = \hat{f}\, \hat{g}$, and $g$ is radial, we have
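In coefficient space, Lemma 2.3 says that radial convolution acts diagonally. A minimal sketch, assuming a hypothetical array `lams` of roots $\lambda_{n,k}$ and a function `g_hat` returning the radial profile of $\hat{g}$ (the exact argument scaling depends on the Fourier convention):

```python
import numpy as np

def radial_convolve_coeffs(coeffs, lams, g_hat):
    """Convolution with a radial function g is a diagonal transform:
    each coefficient is scaled by the radial Fourier profile of g
    evaluated at that basis function's root lambda_{n,k}."""
    return coeffs * g_hat(np.asarray(lams))

# A trivial filter whose Fourier transform is identically 1 leaves
# the coefficients unchanged.
coeffs = np.array([1.0 + 1j, 2.0, -0.5j])
lams = np.array([2.4, 5.5, 8.7])
out = radial_convolve_coeffs(coeffs, lams, lambda lam: np.ones_like(lam))
```

Deconvolution with a nonvanishing radial filter corresponds to dividing by the same diagonal.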
3. Computational method
Recall that $N^2$ is the number of pixels in the given image, and $M$ is the number of functions in our expansion. Throughout, we write all computational complexities in terms of $N$, with the understanding that $M \le N^2$. This assumption is not restrictive, since it does not make sense to expand an image using more basis functions than the number of pixels.
Remark 3.1 (Precision of calculations).
The computational complexities also have a (logarithmic) dependence on the reciprocal of the desired precision $\epsilon$; for simplicity, we treat $\epsilon$ as a fixed constant and do not include it in complexity statements.
Remark 3.2 (Continuous to discrete).
In the following, we describe how to apply the operators $B$ and $B^*$ defined above in §1.3 in $O(N^2 \log N)$ operations. For the purpose of exposition, we start by describing a simplified method before presenting the full method. The section is organized as follows:
3.3. Simplified method
In this section, for the purpose of exposition, we present a simplified method that applies $B$ and $B^*$ in $O(N^3)$ operations. Recall that $f_j$ are the pixel values of the image, $x_j$ are the pixel coordinates, and $\psi_i$, for $i = 1, \dots, M$, are the eigenfunctions below the given bandlimit; see §1.2. Let $\lambda_i$ denote the Bessel function roots (square roots of the eigenvalues) corresponding to $\psi_i$. Further, suppose that $n_i$ and $k_i$ are the indices such that $\psi_i = \psi_{n_i, k_i}$ and $\lambda_i = \lambda_{n_i, k_i}$.
Fix a positive integer $S$, the number of angular nodes (see §4.3 for a justification of this parameter choice). We first describe how to apply $B^*$, which is defined by
Using the type-2 2-D NUFFT, compute:
for $s = 0, \dots, S - 1$ and $i = 1, \dots, M$, to precision $\epsilon$.
Using the FFT, compute:
By Lemma 2.2 it follows that
where $i$ is the integer such that $n = n_i$ and $k = k_i$.
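The structure of the simplified pipeline can be sketched as follows. For readability (and to stay self-contained), the type-2 NUFFT of Step 1 is replaced by a direct nonuniform DFT, which produces the same output but without the NUFFT's speedup; the array layout is an assumption for the example:

```python
import numpy as np

def simplified_steps(f, xs, ys, lams, num_angles):
    """Toy version of the simplified method:
    (1) evaluate the 2-D Fourier transform of the image at polar nodes
        (radius lam, equispaced angles) -- direct sum here, NUFFT in
        the real method;
    (2) FFT over the angle to separate angular frequencies n."""
    thetas = 2 * np.pi * np.arange(num_angles) / num_angles
    F = np.empty((len(lams), num_angles), dtype=complex)
    for a, lam in enumerate(lams):
        # xi = lam * (cos t, sin t); F = sum_j f_j exp(-i <x_j, xi>)
        phase = xs[None, :] * np.cos(thetas)[:, None] + ys[None, :] * np.sin(thetas)[:, None]
        F[a] = np.exp(-1j * lam * phase) @ f
    G = np.fft.fft(F, axis=1) / num_angles  # angular Fourier coefficients
    return G

# A point mass at the origin has a constant Fourier transform, so only
# the n = 0 angular mode survives.
G = simplified_steps(np.array([1.0]), np.array([0.0]), np.array([0.0]), [1.0, 2.0], 8)
```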
In the following remarks, we discuss how reversing these steps provides an immediate way to apply and discuss limitations of the simplified method.
Remark 3.3 (Applying $B$).
Note that each step of the algorithm consists of applying a linear transform whose adjoint can be applied in a similar number of operations.
Indeed, the adjoint of Step 1 (which uses a type-2 NUFFT) is a type-1 2-D NUFFT, and the adjoint of Step 2 (which uses a standard FFT) is an inverse FFT. Step 3 (interpolation) can be computed in a variety of ways (including by using the NUFFT or FFT and sparse matrices; see Remark 3.5), each of which has a similarly fast adjoint [11, 14, 22]. Therefore, in order to apply $B$, which is defined by
we can just apply the adjoint of each step in the reverse order.
Remark 3.4 (Limitations of simplified method).
The problem with the simplified method is that the first step is prohibitively expensive, since we want a method that runs in $O(N^2 \log N)$ operations. The key to overcoming this limitation is to observe that the required Fourier-space values are samples of analytic functions, and to add the additional ingredient of fast interpolation from Chebyshev nodes.
3.4. Summary of fast method
In this section, we describe how the method of the previous section can be improved to $O(N^2 \log N)$ operations by using fast interpolation from Chebyshev nodes. Recall that Lemma 2.2 defines
and states that the coefficients can be computed by evaluating the appropriate function at the radius $\lambda_{n,k}$. The problem with the simplified method is the first step: computing the Fourier transform at all the roots, with enough angular nodes to sample at the Nyquist rate, requires a set of $O(N^3)$ points, which is already prohibitive. Fortunately, there is a simple solution to this issue: for each angular frequency $n$, the required values are analytic functions of the radius, which can be interpolated to determine the coefficients if they are tabulated at appropriate points; see §4.2 for a discussion of why $O(N)$ points are sufficient. Here is an informal summary of the fast method:
Compute the Fourier transform of the image at polar nodes
for Chebyshev radial nodes and equispaced angles.
Compute the angular FFT for the Chebyshev nodes and frequencies $n$.
For each of the frequencies, use fast interpolation from the Chebyshev nodes to the Bessel function roots $\lambda_{n,k}$ associated with that frequency $n$. We illustrate the interpolation step in Fig. 2.
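The interpolation step can be sketched with the classical barycentric formula for first-kind Chebyshev nodes. This is a generic, single-point sketch; the paper's optimized NUFFT/sparse-matrix variants discussed in Remark 3.5 are not reproduced here:

```python
import numpy as np

def cheb_nodes(num, a, b):
    """Chebyshev nodes of the first kind mapped to the interval [a, b]."""
    j = np.arange(num)
    return 0.5 * (a + b) + 0.5 * (b - a) * np.cos((2 * j + 1) * np.pi / (2 * num))

def cheb_interp(fvals, a, b, x):
    """Barycentric interpolation from first-kind Chebyshev nodes to x;
    the weights are w_j = (-1)^j sin((2j+1) pi / (2 num))."""
    num = len(fvals)
    nodes = cheb_nodes(num, a, b)
    j = np.arange(num)
    w = (-1.0) ** j * np.sin((2 * j + 1) * np.pi / (2 * num))
    d = x - nodes
    if np.any(d == 0):  # x coincides with a node
        return fvals[np.argmin(np.abs(d))]
    return np.sum(w * fvals / d) / np.sum(w / d)

# Analytic functions are resolved to near machine precision with few nodes.
nodes = cheb_nodes(24, 0.0, 3.0)
approx = cheb_interp(np.exp(nodes), 0.0, 3.0, 1.234)
```

The rapid convergence for analytic integrands is exactly what §4.2 exploits to get away with $O(N)$ radial nodes.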
3.5. Detailed description of fast method
Recall that $\psi_i$ are the eigenfunctions below our bandlimit, and $\lambda_i$ are the associated Bessel function roots (which are the square roots of the associated eigenvalues). Assume that the roots are listed in ascending order, and note that they are contained in a bounded interval. Fix the number of Chebyshev nodes $K$ as a function of the desired relative error $\epsilon$; see §4.2 for a justification of this parameter choice. Recall that the $K$ Chebyshev nodes of the first kind for an interval are defined by
for $j = 0, \dots, K - 1$. The fast method follows the same strategy as the simplified method. The key difference is that the fast method replaces the final step with fast interpolation, and therefore evaluates the Fourier transform on a smaller number of radial nodes (the Chebyshev nodes).
Using the NUFFT, in $O(N^2 \log N)$ operations, compute:
Using the FFT, in $O(N^2 \log N)$ operations, compute:
Using fast interpolation from Chebyshev nodes, in $O(N^2 \log N)$ operations, compute:
for each basis function, where $i$ is the integer such that $\psi_i = \psi_{n_i, k_i}$.
As detailed in Remark 3.3, the fact that each step is a linear operator whose adjoint can be applied in a similar number of operations provides a similarly fast method to apply $B$.
Remark 3.5 (Methods for fast interpolation from Chebyshev nodes).
Theoretically, the most straightforward way to perform fast interpolation from Chebyshev nodes is to consider the function values at the Chebyshev nodes as function values associated with equally spaced points on a half circle (by a change of variables). Then the fast interpolation (called spectral interpolation) can be computed by using the NUFFT in $O(N \log N)$ operations, or from arbitrary nodes using the Fast Multipole Method. Practically speaking, since the NUFFT run-time constant is high, choosing a fixed number of source points centered around the target points and then applying a precomputed sparse (barycentric) interpolation matrix may be more practical; sparse interpolation can be used in combination with standard spectral interpolation (using the discrete cosine transform) to double or quadruple the number of Chebyshev nodes before the sparse interpolation step.
4. Continuous to discrete
4.1. Maximum bandlimit
In this section, we derive a bound for the bandlimit parameter, which controls the number of basis functions used in the expansion. The number of basis functions should not exceed the number of pixels of the image corresponding to points within the unit disk. In combination with Weyl’s law, this leads to a bound on the maximum bandlimit.
In more detail, from Weyl's law, the number of Dirichlet eigenvalues of the Laplacian on the disk that are at most $\lambda^2$ is bounded by $\lambda^2/4$. Similarly, the inscribed circle in an $N \times N$ image has radius $N/2$ pixels. The number of pixels contained inside this circle is therefore approximately $\pi N^2 / 4$.
Practically speaking, it can be advantageous to expand the image in fewer basis functions than described by (18), but, in any case, this is a natural choice, and different motivations exist in the literature.
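The Weyl-law bound can be checked numerically by counting the roots $\lambda_{n,k}$ below a given bandlimit, counting each $n \neq 0$ twice (for $\pm n$); this is a sanity-check sketch, not part of the algorithm:

```python
import numpy as np
from scipy.special import jn_zeros

def count_harmonics(bandlimit):
    """Count disk harmonics psi_{n,k} with lambda_{n,k} <= bandlimit.
    Consecutive zeros of J_n are separated by roughly pi, so
    int(bandlimit / pi) + 2 roots per order suffice; the first zero of
    J_n exceeds n, so the loop over orders terminates."""
    max_roots = int(bandlimit / np.pi) + 2
    count, n = 0, 0
    while True:
        m = int(np.count_nonzero(jn_zeros(n, max_roots) <= bandlimit))
        if m == 0:
            break
        count += m if n == 0 else 2 * m
        n += 1
    return count

# Weyl's law predicts roughly bandlimit**2 / 4 harmonics (and, for the
# disk, the true count stays below that leading-order bound).
c = count_harmonics(30.0)
```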
Remark 4.1 (FFT bandlimit heuristic).
One heuristic for setting the bandlimit is based on the fast Fourier transform (FFT). For a centered FFT of a signal of length $N$, the maximum frequency is $N/2$, which (for a grid of spacing $2/N$ on $[-1, 1]$) corresponds to a bandlimit of $\pi N / 2$. Note that
$$\frac{(\pi N/2)^2}{4} = \frac{\pi^2 N^2}{16} \le \frac{\pi N^2}{4},$$
so this FFT bandlimit heuristic does indeed produce a reasonable bandlimit below the bound (18) derived from Weyl's law. We use this bandlimit for our numerical experiments. The computational complexity and accuracy guarantees of the method presented in this paper hold for any bandlimit up to the maximum. However, the fact that the fast method performs interpolation in Fourier space inside a disk bounded by the maximum bandlimit provides additional motivation for this FFT-based heuristic, since it ensures that the disk is contained within the square in frequency space used by the two-dimensional FFT.
4.2. Number of radial nodes
The following lemma shows that $O(N)$ Chebyshev nodes are sufficient for accurate interpolation.
When running the algorithm on an image with $N^2$ pixels at the maximum bandlimit, choosing the number of Chebyshev nodes $K$ to satisfy
achieves relative error less than $\epsilon$ in the interpolation step of §3.5.
It will be clear from the proof that the constant in the statement of the lemma is an overestimate; see Remark 4.2 for a discussion of how this constant can be improved.
Proof of Lemma 4.1.
When interpolating a smooth function defined on an interval using an interpolating polynomial at $K$ Chebyshev nodes, the residual term can be written as
where $\xi$ lies in the interpolation interval; indeed, this can be deduced from [9, §3.3]. Applying this result here, the residual satisfies
where the final inequality follows from the bound on the maximum bandlimit; see §4.1. In order to apply this bound, we estimate
where $\beta^{(m)}$ denotes the $m$-th derivative of the function $\beta$ defined in (13). Since the underlying function is compactly supported, its Fourier transform is analytic, and these derivatives are therefore all finite. More precisely, they can be bounded uniformly by using
We write out
where the third equality uses the fact that the function is supported on the disk of radius $1$ and the last inequality uses Cauchy-Schwarz applied to the function multiplied by the indicator function of the unit disk. It follows that
Therefore, in order to achieve relative error $\epsilon$, it suffices to choose $K$ such that
It follows from Stirling’s approximation [9, 5.11.3] that
Setting the right-hand side equal to $\epsilon$ and solving for $K$ shows that
is sufficient to achieve relative error less than $\epsilon$. ∎
4.3. Number of angular nodes
The following lemma shows that $O(N)$ angular nodes are sufficient to achieve relative error $\epsilon$.
When running the algorithm on an image with $N^2$ pixels at the maximum bandlimit, choosing the number of equispaced angular nodes $S$ to satisfy
achieves relative error less than $\epsilon$ in the FFT step of §3.5.
As above, we emphasize that the constant in the statement of this result is an overestimate; see Remark 4.2 for a discussion of how this constant can be improved.
Proof of Lemma 4.2.
We will require the following classical result: suppose that $g$ is a smooth periodic function on the torus $[0, 2\pi)$, where $0$ and $2\pi$ are identified. Then
for all $S$, where $\hat{g}(m)$ denotes the $m$-th Fourier coefficient of $g$.
Indeed, we start by expanding $g$ in its Fourier series and use the fact that $\sum_{s=0}^{S-1} e^{2\pi i m s / S}$ is equal to $S$ if $m$ is a multiple of $S$ and is $0$ otherwise to obtain:
Multiplying and dividing terms by the corresponding frequencies, and using Cauchy-Schwarz:
where the last inequality follows from Parseval's identity and a standard series identity.
In order to determine the required number of angular nodes, we write the image and its Fourier transform as
By the triangle inequality, it suffices to estimate integrals of functions of the form
to precision $\epsilon$ for all relevant values of the parameters. We have the estimate
Thus, in order to integrate these functions accurately, we need to choose $S$ such that
It follows that it suffices to choose $S$ larger than twice the maximum angular frequency. We conclude that
is sufficient to achieve error $\epsilon$; using the bound from (19) completes the proof. ∎
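The classical equispaced-quadrature fact used in the proof can be checked numerically: the trapezoidal rule on the circle integrates trigonometric polynomials exactly once the node count exceeds the highest frequency present. A small self-contained check:

```python
import numpy as np

def trapezoid_circle(g, num_nodes):
    """Equispaced quadrature of a periodic function over [0, 2*pi):
    exact for trigonometric polynomials whose nonzero frequencies are
    not multiples of num_nodes."""
    t = 2 * np.pi * np.arange(num_nodes) / num_nodes
    return (2 * np.pi / num_nodes) * np.sum(g(t))

# cos(3t)^2 = 1/2 + cos(6t)/2 has frequencies 0 and +/-6, so 7 nodes
# already integrate it exactly (the integral is pi).
val = trapezoid_circle(lambda t: np.cos(3 * t) ** 2, 7)
```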
Remark 4.2 (Improving estimates for number of radial and angular nodes).
While Lemmas 4.1 and 4.2 show that the number of radial nodes and the number of angular nodes are both $O(N)$, the constants in the lemmas are not optimal. For practical purposes, choosing the minimal number of nodes that achieves the desired error is advantageous to improve the run-time constant of the algorithm, and it is clear from the proofs how the estimates can be refined. For Lemma 4.1, motivated by (21), we compute the bound
for increasing trial values and choose the smallest value for which the bound falls below $\epsilon$; the resulting values can then replace the constants in the statements of Lemmas 4.1 and 4.2, respectively. This procedure improves the estimate of the required number of angular and radial nodes by a constant factor.
5. Numerical results
5.1. Accuracy results
In this section, we report numerical results for the accuracy of the fast algorithm compared to dense matrix multiplication. Recall that $B$ maps coefficients to images by
and its adjoint transform $B^*$ maps images to coefficients by
see §1.3. By defining the matrix $B$ explicitly by
we can apply $B$ and $B^*$ by dense matrix multiplication to test the accuracy of our fast method. Since the size of the matrix scales like $O(N^4)$ for $N \times N$ images, constructing these matrices quickly becomes prohibitive, so the comparison is only given up to a moderate image size; see Table 1, where