
Numerically computing the index of mean curvature flow self-shrinkers

07/12/2020
by   Yakov Berchenko-Kogan, et al.

Surfaces that evolve by mean curvature flow develop singularities. These singularities can be modeled by self-shrinkers, surfaces that shrink by dilations under the flow. Singularities modeled on classical self-shrinkers, namely spheres and cylinders, are stable under perturbations of the flow. In contrast, singularities modeled on other self-shrinkers, such as the Angenent torus, are unstable: perturbing the flow will generally change the kind of singularity. One can measure the degree of instability by computing the Morse index of the self-shrinker, viewed as a critical point of an appropriate functional. In this paper, we present a numerical method for computing the index of rotationally symmetric self-shrinkers. We apply this method to the Angenent torus, the first known nontrivial example of a self-shrinker. We find that, excluding dilations and translations, the index of the Angenent torus is 5, which is consistent with the lower bound of 3 from the work of Liu and the upper bound of 29 from our earlier work. Also, we unexpectedly discover two additional variations of the Angenent torus with eigenvalue -1.


1. Introduction

Mean curvature flow is a geometric evolution equation for surfaces $\Sigma_t \subseteq \mathbb{R}^3$, under which each point $x \in \Sigma_t$ moves in the inward normal direction with speed proportional to the mean curvature of $\Sigma_t$ at $x$. More generally, mean curvature flow can be defined as the gradient flow for the area functional. This flow has many applications in geometry and topology, as well as in image denoising.

Under mean curvature flow, spheres and cylinders will shrink by dilations. However, these are not the only examples of such self-shrinkers. Angenent found a self-shrinking torus in 1989 [2], and since then many other examples have been constructed [9, 10, 13, 14, 17, 18, 20, 21, 22]. These surfaces are important because they model singularities that develop under mean curvature flow. For example, if the initial surface is convex, then it will become rounder as it shrinks, eventually becoming close to a sphere before it shrinks to a point. On the other hand, if the initial surface is not convex, then it may develop a singularity that looks like a different self-shrinker.

Colding and Minicozzi [6] show that spheres and cylinders are stable, in the sense that if we perturb a sphere or a cylinder, then the resulting mean curvature flow will still develop a spherical or cylindrical singularity, respectively. Other self-shrinkers are unstable. Nonetheless, the space of unstable variations, defined appropriately, is still finite-dimensional. As in Morse theory, this dimension is called the index of the self-shrinker. Note, however, that there are different conventions for the index in the literature; the disagreement is about whether or not to include translations and dilations, which give rise to singularities of the same shape but at different places and times. We exclude translations and dilations in this work.

1.1. Results

In this paper, we compute the index of the Angenent torus. However, the computational methods in this paper apply equally well to any rotationally symmetric immersed self-shrinker. See [9] for infinitely many examples of such self-shrinkers. More generally, we expect that combining the methods in this paper with finite element methods could be used to compute the indices of self-shrinkers that do not have rotational symmetry.

Result 1.1.

Excluding dilations and translations, the index of the Angenent torus is $5$.

Computing the index of a self-shrinker amounts to counting the number of negative eigenvalues of a certain differential operator called the stability operator $L$, which acts on functions that represent normal variations of the self-shrinker. In this paper, we construct a suitable finite-dimensional matrix approximation to the stability operator, and then we compute the eigenvalues and eigenvectors of this matrix. We recover the facts [6] that the variation corresponding to dilation has eigenvalue $-1$ and that the variations corresponding to translations have eigenvalue $-\frac{1}{2}$. Additionally, we discover that, surprisingly, two other variations also have eigenvalue $-1$, and these variations have simple explicit formulas.

1.2. Relationship to other work

One can view this paper in several ways.

  • This paper can be viewed as a sequel to [4], where we compute the Angenent torus and its entropy. These results are the starting point for the index computation in the current work.

  • This paper can be viewed as a numerical implementation of [16], where Liu gives a formula for the stability operator of rotationally symmetric self-shrinkers and shows, in particular, that the index of the Angenent torus is at least $3$.

  • This paper can be viewed as a numerical companion to [5], where we prove upper bounds on the index of self-shrinking tori. The numerical discovery of the two new variations with eigenvalue $-1$ inspired a simple new formula for the stability operator [5, Theorem 3.7]. In turn, this formula gives a two-line proof that these numerically discovered variations do indeed have eigenvalue $-1$ [5, Theorem 6.1].

1.3. Outline

In Section 2, we introduce notation and give basic properties of mean curvature flow, self-shrinkers, and the stability operator, both in general and in the rotationally symmetric case. In Section 3, we discuss the numerical methods we use to compute the eigenvalues and eigenfunctions of the stability operator. We present our results in Section 4, listing the first several eigenvalues of the stability operator and presenting plots of the corresponding variations of the Angenent torus cross-section. We obtain the index by counting the variations with negative eigenvalues; we present three-dimensional plots of all of these variations in Figure 5. Next, in Section 5, we estimate the error in our numerical computations by giving plots that show the rate at which our numerically computed eigenvalues converge as we increase the number of sample points. We summarize the first few computed eigenvalues and our error estimates in Table 1. Finally, in Section 6, we give a few promising directions for future work. Additionally, we include Appendix A, where, rather than looking at small eigenvalues, we instead take a cursory look at eigenvalue asymptotics.

2. Preliminaries: Mean curvature flow and self-shrinkers

In this section, we introduce mean curvature flow for surfaces in $\mathbb{R}^3$, define self-shrinkers and their index, and discuss the rotationally symmetric case. For a more detailed introduction in general dimension, see the companion paper [5]. We also refer the reader to [6, 7, 12, 16].

2.1. Notation for surfaces and mean curvature flow

Let $\Sigma \subseteq \mathbb{R}^3$ be an immersed oriented surface. Let $\nu$ denote the unit normal vector to $\Sigma$. Given a point $x \in \Sigma$ and a vector $v \in \mathbb{R}^3$, let $v^{\perp}$ denote the scalar projection of $v$ onto $\nu$, namely $v^{\perp} = v \cdot \nu$. Let $v^{T}$ denote the projection of $v$ onto the tangent plane $T_x\Sigma$, namely $v^{T} = v - v^{\perp}\nu$.

Definition 2.1.

Let $A$ denote the second fundamental form of $\Sigma$. That is, given tangent vectors $u$ and $v$ at a point of $\Sigma$, let $A(u, v) = (D_u \nu) \cdot v$, where $D$ denotes the directional derivative in $\mathbb{R}^3$.

Definition 2.2.

Let $H$ denote the mean curvature of $\Sigma$, defined with the normalization convention $H = \operatorname{tr} A$.

That is, if $e_1, e_2$ is an orthonormal frame at a particular point $x \in \Sigma$, then, at that point, $H = A(e_1, e_1) + A(e_2, e_2)$.

Definition 2.3.

A family of surfaces $\Sigma_t$ evolves under mean curvature flow if

$$\frac{\partial x}{\partial t} = -H\nu.$$

That is, each point on $\Sigma_t$ moves with speed $H$ in the inward normal direction.
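As a quick illustration (not part of the paper's computations), consider a round sphere of radius $R$: with the conventions above, $H = 2/R$, so the flow reduces to the ODE $\frac{dR}{dt} = -\frac{2}{R}$, with closed-form solution $R(t) = \sqrt{R_0^2 - 4t}$. The minimal forward-Euler sketch below confirms this.

```python
import math

# Mean curvature flow of a round sphere in R^3: dR/dt = -H = -2/R.
# Forward-Euler integration, compared against the closed form
# R(t) = sqrt(R0^2 - 4t); the sphere becomes extinct at t = R0^2 / 4.
R0, dt = 2.0, 1e-5
R, t = R0, 0.0
while t < 0.5:
    R -= dt * 2.0 / R
    t += dt
exact = math.sqrt(R0**2 - 4 * 0.5)  # R(0.5) = sqrt(2)
```

The extinction time $R_0^2/4$ is the reason for the normalization in Definition 2.4 below: the sphere of radius $2$ becomes extinct in exactly one unit of time.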

2.2. Self-shrinkers

A surface is a self-shrinker if it evolves under mean curvature flow by dilations. In this paper, however, we restrict this terminology to surfaces that shrink to the origin in one unit of time.

Definition 2.4.

A surface $\Sigma$ is a self-shrinker if $\Sigma_t = \sqrt{-t}\,\Sigma$ is a mean curvature flow for $-1 \le t < 0$.

We have an extremely useful variational formulation for self-shrinkers as critical points of Huisken's $F$-functional.

Definition 2.5.

The $F$-functional takes a surface $\Sigma$ and computes its weighted area via the formula

$$F(\Sigma) = \frac{1}{4\pi}\int_\Sigma e^{-|x|^2/4}\,dA.$$

The role of the normalization constant $\frac{1}{4\pi}$ is to ensure that if $\Sigma$ is a plane through the origin, then $F(\Sigma) = 1$.
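Assuming the standard formula $F(\Sigma) = \frac{1}{4\pi}\int_\Sigma e^{-|x|^2/4}\,dA$, the normalization is easy to check numerically. The sketch below verifies $F = 1$ for a plane through the origin by integrating in polar coordinates, and evaluates $F$ for the round sphere of radius $2$, where $|x| = 2$ on the whole surface so the integral is explicit.

```python
import numpy as np

# F of a plane through the origin: (1/(4 pi)) * int exp(-|x|^2/4) dA,
# computed in polar coordinates with a trapezoid rule.
r = np.linspace(0.0, 50.0, 500001)
f = np.exp(-r**2 / 4) * r                    # polar area element r dr
dr = r[1] - r[0]
F_plane = (2 * np.pi / (4 * np.pi)) * ((f[:-1] + f[1:]) / 2 * dr).sum()

# F of the round sphere of radius 2 centered at the origin: |x| = 2
# everywhere, so F = (1/(4 pi)) * exp(-1) * (area 16 pi) = 4/e.
F_sphere = 4 * np.exp(-1.0)
```

The value $4/e \approx 1.47$ is the well-known entropy of the sphere.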

Since varying a surface in a tangential direction does not change the surface, we can define the critical points of $F$ solely in terms of normal variations.

Definition 2.6.

$\Sigma$ is a critical point of $F$ if, for any $u \colon \Sigma \to \mathbb{R}$ with compact support, $F$ does not change to first order as we vary $\Sigma$ by $u$ in the normal direction. More precisely, if we let $\Sigma_s = \{x + s\,u(x)\,\nu(x) : x \in \Sigma\}$, then we have $\frac{d}{ds}\big|_{s=0} F(\Sigma_s) = 0$.

Proposition 2.7 ([6]).

$\Sigma$ is a self-shrinker if and only if $\Sigma$ is a critical point of $F$.

The definition of the $F$-functional is not invariant under translation and dilation: The Gaussian weight is centered at the origin in $\mathbb{R}^3$, and the length scale of the Gaussian is chosen so that the critical surfaces become extinct in exactly one unit of time. Colding and Minicozzi introduce a related concept called the entropy, which coincides with the $F$-functional on self-shrinkers but is invariant under translation and dilation.

Definition 2.8.

The entropy $\lambda(\Sigma)$ of a surface $\Sigma$ is the supremum of the $F$-functional evaluated on all translates and dilates of $\Sigma$, that is, $\lambda(\Sigma) = \sup F(c(\Sigma - x_0))$, where the supremum is taken over all $x_0 \in \mathbb{R}^3$ and all $c > 0$.

If $\Sigma$ is a self-shrinker, defined as above to shrink to the origin in one unit of time, then the supremum among translates and dilates is attained at $\Sigma$ itself, so the entropy of $\Sigma$ coincides with $F(\Sigma)$. However, entropy-decreasing variations of $\Sigma$ and $F$-decreasing variations of $\Sigma$ are not quite the same: when we ask about entropy-decreasing variations, we exclude the “trivial” $F$-decreasing variations of translation and dilation.

2.3. The stability operator

Given a critical point of a flow, the next natural question to ask is about the stability of that critical point. If we perturb a self-shrinker, will the resulting surface flow back to the self-shrinker under the gradient flow for the $F$-functional, or will it flow to a different critical point? What is the maximum dimension of a space of unstable variations? As in Morse theory, answering this question amounts to computing the eigenvalues of the Hessian of the $F$-functional.

Definition 2.9.

Let $\Sigma$ be a self-shrinker. The stability operator $L$ is a differential operator acting on functions $u \colon \Sigma \to \mathbb{R}$ that is the Hessian of $F$ in the following sense.

  • For any $u$,

    (1) $\displaystyle \frac{d^2}{ds^2}\Big|_{s=0} F(\Sigma_s) = -\frac{1}{4\pi}\int_\Sigma (Lu)\,u\,e^{-|x|^2/4}\,dA,$

    where $\Sigma_s$ is the normal variation corresponding to $u$, and

  • $L$ is symmetric with respect to the Gaussian weight, in the sense that $\int_\Sigma (Lu)\,v\,e^{-|x|^2/4}\,dA = \int_\Sigma u\,(Lv)\,e^{-|x|^2/4}\,dA$.

The reason for the odd choice of sign is so that the differential operator $L$ has the same leading terms as the Laplacian $\Delta$. More precisely, Colding and Minicozzi [6] compute that

$$Lu = \mathcal{L}u + |A|^2 u + \frac{1}{2}u,$$

where

$$\mathcal{L}u = \Delta u - \frac{1}{2}\langle x, \nabla u\rangle.$$

Consequently, we make the following sign convention for eigenvalues.

Definition 2.10 (Sign convention for eigenvalues).

We say that $u$ is an eigenfunction of a differential operator $D$ with eigenvalue $\lambda$ if $Du = -\lambda u$.

We conclude that eigenfunctions of the stability operator with negative eigenvalues are unstable variations of the self-shrinker $\Sigma$: If we vary $\Sigma$ in that direction, then the gradient flow for $F$ will take the surface away from $\Sigma$. Meanwhile, eigenfunctions of the stability operator with positive eigenvalues are stable variations of $\Sigma$: There exists a gradient flow line that approaches $\Sigma$ from that direction.

Translating $\Sigma$ in the direction of a vector $v \in \mathbb{R}^3$ corresponds to the normal variation $v \cdot \nu$, and dilating $\Sigma$ corresponds to the normal variation $H$. Colding and Minicozzi compute the corresponding eigenvalues of the stability operator.

Proposition 2.11 ([6]).

For any vector $v \in \mathbb{R}^3$, we have $L(v \cdot \nu) = \frac{1}{2}\,v \cdot \nu$. Meanwhile, for dilation, we have $LH = H$.

Thus, assuming these functions are nonzero, $H$ and $v \cdot \nu$ are eigenfunctions of $L$, giving us four independent eigenfunctions. With our sign convention, the eigenvalue corresponding to $H$ is $-1$, and the eigenvalue corresponding to $v \cdot \nu$ is $-\frac{1}{2}$. Additionally, because $F$ is invariant under rotations about the origin, a variation of $\Sigma$ corresponding to a rotation about the origin will be an eigenfunction with eigenvalue $0$.

Because $L$ has the same symbol as $\Delta$, it has a finite number of negative eigenvalues, at least in the case of compact $\Sigma$. Usually, one defines the index of a critical point of a gradient flow to be the number of negative eigenvalues of the Hessian. However, because translations and dilations do not change the shape of the self-shrinker, we exclude them in this context.

Definition 2.12.

The index of a self-shrinker $\Sigma$ is the number of negative eigenvalues of the stability operator $L$, excluding those eigenvalues corresponding to translations and dilations.

Assuming that $\Sigma$ is not invariant under any translations or dilations, its index is simply four less than the usual Morse index.

Under mild assumptions, Colding and Minicozzi show that the only self-shrinkers with index zero are planes, round spheres, and round cylinders [6].

2.4. Rotationally symmetric self-shrinkers

If $\Sigma$ is a hypersurface with rotational symmetry, we can understand it in terms of its cross-sectional curve $\gamma$. We refer the reader to [16].

We will use cylindrical coordinates $(r, \theta, z)$ on $\mathbb{R}^3$.

Definition 2.13.

We say that a hypersurface is rotationally symmetric if it is invariant under rotations about the $z$-axis.

If $\Sigma$ is rotationally symmetric, we let $\gamma$ denote its cross-section, which we also think of as being a curve in the half-plane $\{(r, z) : r > 0\}$.

We can write the $F$-functional in terms of $\gamma$.

Proposition 2.14.

If $\Sigma$ is a rotationally symmetric hypersurface with cross-section $\gamma$, then

$$F(\Sigma) = \int_\gamma \frac{r}{2}\,e^{-(r^2+z^2)/4}\,ds,$$

where $ds$ denotes integration with respect to arc length.
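As a consistency check of this cross-section formula (with the weight written out explicitly in the code below, in the form assumed in Definition 2.15), we can integrate over the cross-section of the radius-2 sphere, a semicircle in the $(r, z)$ half-plane, and recover $F = 4/e$, the same value as the direct surface computation.

```python
import numpy as np

# Cross-section of the radius-2 sphere: (r, z) = (2 sin(phi), 2 cos(phi)).
phi = np.linspace(0.0, np.pi, 200001)
r = 2 * np.sin(phi)
z = 2 * np.cos(phi)
w = (r / 2) * np.exp(-(r**2 + z**2) / 4)      # assumed weight mu(r, z)
ds = 2 * (phi[1] - phi[0])                    # arc length element, radius 2
F_sphere = ((w[:-1] + w[1:]) / 2 * ds).sum()  # trapezoid rule on the curve
```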

To simplify our notation, we will let $\mu$ denote this weight.

Definition 2.15.

Let $\mu$ denote the weight

$$\mu = \frac{r}{2}\,e^{-(r^2+z^2)/4}.$$

With this notation, our expression for $F(\Sigma)$ is simply the length of the curve $\gamma$ with respect to the conformally changed metric $g = \mu^2\,g_{\mathrm{Euc}}$.

Definition 2.16.

Let $g$ denote the metric $g = \mu^2(dr^2 + dz^2)$ on the half-plane $\{(r, z) : r > 0\}$.

If $\Sigma$ is a self-shrinker, then $\Sigma$ is a critical point for $F$, so $\gamma$ is a critical point for length with respect to $g$. In other words, $\gamma$ is a geodesic with respect to $g$.

2.5. The stability operator for rotationally symmetric self-shrinkers

Varying the cross-section $\gamma$ only yields rotationally symmetric variations of $\Sigma$. To understand the stability operator in terms of $\gamma$, we must understand non-rotationally symmetric variations of $\Sigma$ as well. Liu [16] does so by decomposing normal variations into their Fourier components $u\cos(m\theta)$ and $u\sin(m\theta)$. The stability operator commutes with this Fourier decomposition, so we can decompose $L$ into its Fourier components $L_m$, which are operators acting on functions on $\gamma$.

We begin with the rotationally symmetric part of the Fourier decomposition.

Definition 2.17.

Let $\Sigma$ be a rotationally symmetric self-shrinker with cross-section $\gamma$. For any $u \colon \gamma \to \mathbb{R}$, we have a corresponding rotationally symmetric function $\tilde{u} \colon \Sigma \to \mathbb{R}$. Define the operator $L_0$ by $L_0 u = (L\tilde{u})|_\gamma$.

Note that, if $\tilde{u}$ is a rotationally symmetric function as above, then the normal variation $\Sigma_s$ has cross-section $\gamma_s$, and equation (1) in Definition 2.9 becomes

(2) $\displaystyle \frac{d^2}{ds^2}\Big|_{s=0} F(\Sigma_s) = -\int_\gamma (L_0 u)\,u\,\mu\,d\ell,$

where $d\ell$ denotes integration with respect to the usual Euclidean arc length.

We now define the other Fourier components of .

Definition 2.18.

For each non-negative integer , let be the operator

Proposition 2.19 ([16]).

For any $u \colon \gamma \to \mathbb{R}$ and any $m$, we have $L(u\cos(m\theta)) = (L_m u)\cos(m\theta)$, and likewise with $\sin(m\theta)$ in place of $\cos(m\theta)$.

Thus, the eigenfunctions of $L$ are of the form $u\cos(m\theta)$ and $u\sin(m\theta)$, where $u$ is an eigenfunction of $L_m$. Consequently, we can determine the eigenvalues and eigenfunctions of the stability operator $L$ by determining the eigenvalues and eigenfunctions of $L_m$ for all $m$.

In the case where $\Sigma$ is a rotationally symmetric torus, we have the following formula for $L_0$.

Proposition 2.20 ([5]).

Let $\Sigma$ be a rotationally symmetric torus with cross-section $\gamma$. Then $L_0$ is given by the formula of [5, Theorem 3.7] in terms of the weight $\mu$ and the Laplacian $\Delta_g$ on $\gamma$ with respect to the conformally changed metric $g$.

See also Appendix A, where we rewrite $L_0$ as a Schrödinger operator, and [16], where Liu gives the general formula for $L_0$.

3. Numerical methods

In [4], we computed the Angenent torus cross-section by viewing it as a geodesic with respect to the metric $g$. The result is a discrete approximation $X = (x_0, \dots, x_{N-1})$ to the curve $\gamma$, where the points $x_i$ are equally spaced with respect to the metric $g$, as illustrated in Figure 1. As we will see, having the points be equally spaced in this way is particularly well-suited for computing the index of the Angenent torus. Additionally, we computed the entropy of the Angenent torus, which is the length of the curve $\gamma$ with respect to the metric $g$.

Figure 1. The Angenent torus cross-section (blue curve) and the discrete approximation to it (orange dots) computed in [4].

We now proceed to compute a matrix approximation to the differential operator $L_0$, using equation (2). We can first rewrite this equation in terms of the metric $g$.

Proposition 3.1.

Let $\gamma$ be the cross-section of a self-shrinking torus, and let $u \colon \gamma \to \mathbb{R}$. Then

(3) $\displaystyle \frac{d^2}{ds^2}\Big|_{s=0} \operatorname{length}_g(\gamma_s) = -\int_\gamma (L_0 u)\,u\,\mu\,d\ell,$

where $\gamma_s$ is the normal variation corresponding to $u$, the expression $\operatorname{length}_g$ denotes the length of a curve with respect to the conformally changed metric $g$, and $d\ell$ denotes integration with respect to arc length.

In other words, with respect to the metric $g$, the operator $L_0$ is the Hessian of the length functional, in the same sense as Definition 2.9.

The task now is to make discrete approximations to all of the terms in equation (3). We begin by approximating length the same way as in [4].

3.1. Discrete length and its Hessian

Definition 3.2.

Given two points $x_i$ and $x_{i+1}$, we approximate the distance between them with respect to the metric $g$ by setting $\bar{\mu}_i$ to be $\mu$ evaluated at the midpoint $\frac{1}{2}(x_i + x_{i+1})$, and then computing the discrete approximation to the distance to be

(4) $d_i = \bar{\mu}_i\,\lVert x_{i+1} - x_i\rVert,$

where $\lVert\cdot\rVert$ denotes the usual Euclidean norm.

Definition 3.3.

Given a discrete closed curve $X = (x_0, \dots, x_{N-1})$ that approximates a curve $\gamma$, we approximate its length with the discrete length functional

(5) $\operatorname{length}(X) = \displaystyle\sum_{i=0}^{N-1} d_i,$

where the indices are taken modulo $N$.
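In code, Definitions 3.2 and 3.3 take only a few lines. The sketch below is a minimal numpy version with hypothetical helper names `mu`, `dist`, and `length`, and with the weight of Definition 2.15 written out in the assumed form; it checks the discrete length of a small closed test curve in the half-plane against a direct quadrature of the weight along the same curve.

```python
import numpy as np

def mu(p):
    # Weight from Definition 2.15 (assumed form): (r/2) exp(-(r^2+z^2)/4)
    r, z = p
    return (r / 2) * np.exp(-(r**2 + z**2) / 4)

def dist(p, q):
    # Discrete g-distance (4): mu at the midpoint times Euclidean distance
    return mu((p + q) / 2) * np.linalg.norm(q - p)

def length(X):
    # Discrete length (5): sum of segment distances around the closed curve
    N = len(X)
    return sum(dist(X[i], X[(i + 1) % N]) for i in range(N))

# Test curve: a Euclidean circle of radius 0.5 around (2, 0) in the
# half-plane. The discrete length should converge to the line integral
# of mu with respect to Euclidean arc length.
N = 2000
theta = 2 * np.pi * np.arange(N) / N
X = np.column_stack([2 + 0.5 * np.cos(theta), 0.5 * np.sin(theta)])
L_disc = length(X)
L_quad = sum(mu(x) for x in X) * (2 * np.pi * 0.5 / N)  # Riemann sum
```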

The computation of the Angenent torus cross-section in [4] is set up so that $X$ is a critical point for the functional $\operatorname{length}$, analogously to the fact that the true cross-sectional curve $\gamma$ is a critical point for the length functional $\operatorname{length}_g$. The corresponding critical value $\operatorname{length}(X)$ is an approximation of $\operatorname{length}_g(\gamma)$, which is the entropy of the Angenent torus.

Thus, we can approximate the left-hand side of (3) with

$$\frac{d^2}{ds^2}\Big|_{s=0} \operatorname{length}(X_s),$$

where $X_s$ is a variation of the discrete curve $X$. More precisely, $X_s = (x_0 + s w_0, \dots, x_{N-1} + s w_{N-1})$, where $W = (w_0, \dots, w_{N-1})$ is a sequence of vectors $w_i \in \mathbb{R}^2$ representing a discrete vector field on $X$.

We can think of these second derivatives as giving us the Hessian of $\operatorname{length}$.

Definition 3.4.

Let $X$ be a discrete curve. Viewing $\operatorname{length}$ as a function $(\mathbb{R}^2)^N \to \mathbb{R}$, let $\operatorname{Hess}$ denote the Hessian of $\operatorname{length}$ at $X$, a $2N \times 2N$ matrix.

Then, by definition of the Hessian, if we view $W$ as a vector in $\mathbb{R}^{2N}$, we have

(6) $\displaystyle \frac{d^2}{ds^2}\Big|_{s=0} \operatorname{length}(X_s) = W^{\mathsf{T}}(\operatorname{Hess})W.$

3.2. The outward unit normal of a discrete curve

Recall, however, that we restricted our attention to normal variations of $\gamma$, because tangential variations do not change length. In other words, we considered only those variations where $w = u\nu$ for some function $u$. To do the same for our discrete variations of our discrete curve $X$, we must appropriately define a normal vector $\nu_i$ at each point $x_i$.

In this situation, there is a very natural way to do so by considering what happens to the discrete length if we vary a single point $x_i$ of $X$ while leaving all of the other points fixed. Because $X$ is a critical point for $\operatorname{length}$, the first derivative is zero. Meanwhile, the second derivative is the $i$th $2 \times 2$ submatrix on the diagonal of the Hessian $\operatorname{Hess}$, which we denote by $M_i$.

We expect that if we move $x_i$ in a tangential direction, either towards $x_{i-1}$ or $x_{i+1}$, then $\operatorname{length}(X)$ will not change very much. On the other hand, if we move $x_i$ in a normal direction, then $\operatorname{length}(X)$ will increase. Thus, we expect $M_i$ to have an eigenvalue close to zero, whose eigenvector approximates the direction tangent to the curve, and a positive eigenvalue, whose eigenvector approximates the direction normal to the curve. We obtain the following definition of the normal vector field $\nu_i$.

Definition 3.5.

Let $X$ be a discrete curve. Let $M_0, \dots, M_{N-1}$ be the $2 \times 2$ blocks along the diagonal of $\operatorname{Hess}$. We define the outward unit normal vector field by letting $\nu_i$ be the unit eigenvector corresponding to the larger eigenvalue of $M_i$, with the sign chosen so that $\nu_i$ points outwards.
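The idea behind this definition is easy to see on a toy example. Below, we take a discrete unit circle with the plain Euclidean length functional (weight $1$ rather than $\mu$) and compute one diagonal Hessian block by finite differences; the eigenvector for the larger eigenvalue recovers the radial (normal) direction, while the tangential eigenvalue is much smaller. The helper names are hypothetical.

```python
import numpy as np

N = 40
theta = 2 * np.pi * np.arange(N) / N
X = np.column_stack([np.cos(theta), np.sin(theta)])   # discrete unit circle

def length(Y):
    # Euclidean perimeter of the closed discrete curve (weight 1)
    return sum(np.linalg.norm(Y[(i + 1) % N] - Y[i]) for i in range(N))

def block(Y, i, h=1e-5):
    # 2x2 diagonal Hessian block M_i of `length`, by central differences
    # in the two coordinates of the single point y_i
    M = np.zeros((2, 2))
    for a in range(2):
        for b in range(2):
            def f(da, db):
                Z = Y.copy()
                Z[i, a] += da
                Z[i, b] += db
                return length(Z)
            M[a, b] = (f(h, h) - f(h, -h) - f(-h, h) + f(-h, -h)) / (4 * h**2)
    return M

w, V = np.linalg.eigh(block(X, 0))
normal = V[:, np.argmax(w)]
if normal @ X[0] < 0:                # orient outward, away from the center
    normal = -normal
# At x_0 = (1, 0), the outward normal should be (1, 0), and the
# tangential eigenvalue should be much smaller than the normal one.
```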

A discrete function is just a discrete set of values $u_i \in \mathbb{R}$ for $0 \le i < N$. Now that we have our outward unit normal vector field, we can define $w_i = u_i \nu_i$. Viewing $u$ as a vector in $\mathbb{R}^N$ and $W$ as a vector in $\mathbb{R}^{2N}$, this formula defines a linear transformation $P \colon \mathbb{R}^N \to \mathbb{R}^{2N}$.

Definition 3.6.

The matrix $P$ is the $2N \times N$ block diagonal matrix whose diagonal blocks are the $\nu_i$, viewed as $2 \times 1$ columns.

Using $P$, we can rewrite equation (6) as

(7) $\displaystyle \frac{d^2}{ds^2}\Big|_{s=0} \operatorname{length}(X_s) = u^{\mathsf{T}}P^{\mathsf{T}}(\operatorname{Hess})P\,u,$

where the variation $W$ is defined by $W = Pu$.

3.3. Integrating over a discrete curve

We now turn to the right-hand side of (3). To approximate it, we must first understand how to approximate integration, which is relatively straightforward because the points $x_i$ are equally spaced with respect to arc length in the metric $g$. We can view a discrete function $u$ as an approximation of a function on $\gamma$. Intuitively, because the $x_i$ are equally spaced, the $u_i$ should be weighted equally, so the average value of the $u_i$ should approximate the average value of $u$. In other words, we expect

$$\frac{1}{N}\sum_{i=0}^{N-1} u_i \approx \frac{1}{\operatorname{length}_g(\gamma)}\int_\gamma u\,\mu\,d\ell.$$

To understand this approximation more precisely, we can let $\varphi$ denote the parametrization of $\gamma$ with respect to arc length in the metric $g$. Because the $x_i$ are equally spaced, setting $t_i = i\operatorname{length}_g(\gamma)/N$, we have that $x_i$ is an approximation for $\varphi(t_i)$, and hence $u_i$ is an approximation for $u(\varphi(t_i))$. Thus,

$$\int_\gamma u\,\mu\,d\ell = \int_0^{\operatorname{length}_g(\gamma)} u(\varphi(t))\,dt \approx \frac{\operatorname{length}_g(\gamma)}{N}\sum_{i=0}^{N-1} u_i.$$
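This uniform-weight quadrature is simply a Riemann sum in the arc-length parameter, and for smooth integrands on a closed curve it is highly accurate. A small self-contained sketch, using the known closed form $\int_0^{2\pi} e^{\sin t}\,dt = 2\pi I_0(1)$:

```python
import numpy as np

# Quadrature with points equally spaced in arc length: the average of the
# samples approximates the average of the function. For a smooth periodic
# integrand, this equally-weighted rule converges very quickly.
ell = 2 * np.pi                       # total arc length of the closed curve
N = 500
t = ell * np.arange(N) / N            # equally spaced arc-length samples
f = np.exp(np.sin(t))                 # sample function along the curve
integral = (ell / N) * f.sum()
exact = 2 * np.pi * np.i0(1.0)        # 2 pi I_0(1), modified Bessel function
```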

3.4. The discrete stability operator

Applying these approximations to (3), we can approximate $L_0$ with a discrete operator $\hat{L}_0$.

Definition 3.7.

Viewing a discrete function $u$ as an element of $\mathbb{R}^N$, let $\hat{L}_0$ be the symmetric $N \times N$ matrix satisfying

$$-\frac{\operatorname{length}(X)}{N}\,u^{\mathsf{T}}\hat{L}_0\,u = \frac{d^2}{ds^2}\Big|_{s=0}\operatorname{length}(X_s)$$

for all $u$, where the variation $X_s$ is defined by $W = Pu$.

Proposition 3.8.

$\hat{L}_0 = -\dfrac{N}{\operatorname{length}(X)}\,P^{\mathsf{T}}(\operatorname{Hess})P.$

Proof.

Using equation (7) for the right-hand side of the defining equation, we have

$$-\frac{\operatorname{length}(X)}{N}\,u^{\mathsf{T}}\hat{L}_0\,u = u^{\mathsf{T}}P^{\mathsf{T}}(\operatorname{Hess})P\,u.$$

The result follows using the fact that the Hessian is symmetric. ∎

From here, it is easy to approximate the $m$th Fourier component of the stability operator. Recall that $L_m = L_0 - \frac{m^2}{r^2}$. Letting $r_i$ denote the $r$-coordinate of $x_i$, we can approximate the multiplication operator $u \mapsto \frac{m^2}{r^2}u$ with the diagonal matrix whose entries are $\frac{m^2}{r_i^2}$.

Definition 3.9.

Let $\hat{L}_m$ be the matrix defined by

$$\hat{L}_m = \hat{L}_0 - m^2 D,$$

where $D$ is the diagonal matrix whose entries are the $\frac{1}{r_i^2}$.

By computing the eigenvalues of these matrix approximations $\hat{L}_m$ of $L_m$, we can estimate the eigenvalues of $L_m$. Counting the number of negative eigenvalues will give us the index of the Angenent torus.

3.5. Implementation

In practice, we do not compute the Hessian $\operatorname{Hess}$ all at once. Instead, our first step is to use sympy to symbolically compute the Hessian of the single-segment distance $d_i$ from equation (4), viewed as a function of its two endpoints, giving us a $4 \times 4$ matrix of expressions. Then, for each $i$, we evaluate these expressions at $(x_i, x_{i+1})$, giving us a matrix that we denote by $H_i$. While we could assemble the $H_i$ into the full matrix $\operatorname{Hess}$ by placing the blocks in appropriate locations along the diagonal and adding them together, we instead first compute the normal vectors $\nu_i$ and use them to reduce the dimension of the problem.

Recall that we compute the normal vector $\nu_i$ by seeing how $\operatorname{length}(X)$ changes as we vary the single point $x_i$. Since $d_{i-1}$ and $d_i$ are the only two terms in $\operatorname{length}(X)$ where $x_i$ appears, we only need to know $H_{i-1}$ and $H_i$. Adding the bottom right $2 \times 2$ block of $H_{i-1}$ to the top left $2 \times 2$ block of $H_i$, we obtain the matrix $M_i$ from Definition 3.5, so $\nu_i$ is the unit eigenvector corresponding to the larger eigenvalue of this matrix.

Next, we would like to reduce the dimension of $H_i$ by considering normal variations only. To do so, we assemble $\nu_i$ and $\nu_{i+1}$ into a $4 \times 2$ block diagonal matrix $P_i$, from which we obtain the $2 \times 2$ matrix

$$Q_i = -\frac{N}{\operatorname{length}(X)}\,P_i^{\mathsf{T}} H_i P_i.$$

Essentially, $Q_i$ represents how the distance $d_i$ changes if we vary $x_i$ and $x_{i+1}$ in the normal directions, weighted by the normalization factor $-\frac{N}{\operatorname{length}(X)}$.

Finally, we assemble the matrices $Q_i$ into the matrix $\hat{L}_0$ by placing $Q_i$ into the block formed by the $i$th and $(i+1)$st rows and columns, and then summing over $i$. From here, it is an easy matter to obtain $\hat{L}_m$ by using the $r$-coordinates of the $x_i$ to assemble the diagonal matrix $D$. Once we have the matrices $\hat{L}_m$, we can compute their eigenvalues and eigenvectors with numpy.
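This assembly step, where each small block is added into the overlapping rows and columns $i$ and $i+1 \pmod N$, is a standard pattern. The toy sketch below (not the paper's matrices) assembles the per-segment block $\begin{pmatrix}1 & -1\\ -1 & 1\end{pmatrix}$, the Hessian of $\frac{1}{2}(u_{i+1}-u_i)^2$, into the periodic second-difference matrix, whose eigenvalues are known in closed form:

```python
import numpy as np

# Each segment contributes a small matrix to overlapping rows/columns
# i and i+1 (mod N); the sum is the assembled discrete operator.
N = 8
Q = np.array([[1.0, -1.0], [-1.0, 1.0]])
L_hat = np.zeros((N, N))
for i in range(N):
    idx = [i, (i + 1) % N]
    L_hat[np.ix_(idx, idx)] += Q
# Eigenvalues of the periodic second-difference matrix: 2 - 2 cos(2 pi k / N)
eigs = np.sort(np.linalg.eigvalsh(L_hat))
expected = np.sort(2 - 2 * np.cos(2 * np.pi * np.arange(N) / N))
```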

See the supplementary materials for a Jupyter notebook implementation.

4. Results

Figure 2. The first few eigenvalues and eigenfunctions of $L_0$. The eigenfunctions are pictured as variations (orange dashed) of the Angenent torus cross-section (blue solid). The variation with eigenvalue $-1$ is dilation, and the variation with eigenvalue $-\frac{1}{2}$ is vertical translation.

Figure 3. The first few eigenvalues and eigenfunctions of $L_1$. The variation with eigenvalue $-1$ is the one discussed in [5, Section 6]. The variation with eigenvalue $-\frac{1}{2}$ is horizontal translation, and the variation with eigenvalue $0$ is rotation about the origin.

Figure 4. The first few eigenvalues and eigenfunctions of $L_2$.

We estimate the first few eigenvalues and eigenfunctions of $L_m$ by computing the eigenvalues and eigenvectors of the matrices $\hat{L}_m$. We present these variations of the Angenent torus cross-section in Figures 2–4. Recall that for $m \ge 1$, each eigenfunction $u$ of $L_m$ corresponds to two eigenfunctions $u\cos(m\theta)$ and $u\sin(m\theta)$ of the stability operator $L$. The variations with negative eigenvalues, excluding translation and dilation, contribute to the index. We present three-dimensional plots of the variations with negative eigenvalues in Figure 5. There are nine such variations. One of the variations is dilation, and three are translations, so the entropy index of the Angenent torus is $5$.

Figure 5. The Angenent torus (top row) and its variations with negative eigenvalues. In the first column, we have dilation with eigenvalue $-1$ and vertical translation with eigenvalue $-\frac{1}{2}$. In the second column, we have the pair of variations with eigenvalue $-1$ discussed in [5, Section 6], and the two horizontal translations with eigenvalue $-\frac{1}{2}$.

5. Error analysis

To have confidence in our index result, we must approximate the eigenvalues of $L_m$ to sufficient accuracy to know that values we compute to be negative are indeed negative, and that the lowest computed positive eigenvalue of each $L_m$ is indeed positive. Eigenvalues close to zero could pose a problem, but the only such eigenvalue occurs for $L_1$. Fortunately, for this eigenvalue, we know that its true value is exactly $0$, corresponding to the variations that rotate the Angenent torus about the $x$ or $y$ axes.

To estimate the error in our values, we perform our computation with several different numbers of points $N$. When we know the true value, we can directly see how fast our estimate converges by plotting the logarithm of the error against the logarithm of $N$. When the true value is unknown, we estimate it with the value that results in the best linear fit on a log-log plot. We present our results in Figure 6 and Table 1.
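Estimating a convergence rate from a log-log plot amounts to fitting a least-squares line, e.g. with `numpy.polyfit`. For a synthetic error sequence behaving exactly like $C/N^2$, the fitted slope is $-2$:

```python
import numpy as np

# Convergence-rate estimation via a log-log fit, as in Section 5:
# errors proportional to 1/N^2 produce a fitted slope of -2.
Ns = np.array([100, 200, 400, 800, 1600])
errors = 3.7 / Ns**2                 # synthetic quadratically decaying errors
slope, _ = np.polyfit(np.log(Ns), np.log(errors), 1)
```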

Figure 6. Log-log plots showing the rate at which the error in our eigenvalue estimates decreases as the number of points increases.
Table 1. For the first four eigenvalues of each of the first four operators $L_m$, we give the computed value, the true value (known or estimated from the fit), and the error between the computed eigenvalue and either the true value, if known, or our estimate of the true value based on the fits in Figure 6.

The slopes of the best fit lines in Figure 6 are all close to $-2$, suggesting a quadratic rate of convergence. This rate matches the expected quadratic rate of convergence for the entropy of the Angenent torus that we found in [4]. We observe that the computed eigenvalues sometimes overestimate and sometimes underestimate the true value, and that the magnitude of the error varies. In all cases, however, there is by far more than enough accuracy to determine the sign of the eigenvalues. Because the $k$th eigenvalue of $L_m$ increases with both $k$ and $m$, we know that Table 1 lists all of the negative eigenvalues; there can be no others.

6. Future work

There are several directions for future work, of varying levels of complexity.

6.1. Other rotationally symmetric self-shrinkers

The methods in [4] and in this paper can be immediately applied to any other rotationally symmetric self-shrinker, of which there are infinitely many examples [9]. Mramor's work [19] suggests that the entropy of these examples should grow to infinity. Meanwhile, our work [5] shows that the index should grow at least linearly with the entropy, but our upper bound on the index allows for faster growth. Numerically computing the entropies and indices of these examples could give some insight about whether these index bounds are optimal in terms of asymptotic growth rate, or if they could be improved.

6.2. Self-shrinkers without symmetry

Rotational symmetry allows us to reduce the dimension of the problem: Rather than computing variations of a critical surface in $\mathbb{R}^3$, we can compute variations of a critical curve in the half-plane, which allows us to work with ordinary differential equations rather than partial differential equations. However, by using numerical methods for working with partial differential equations, we could analyze the general problem without rotational symmetry in much the same way. Namely, we could discretize the problem by triangulating the surface. We could then approximate the $F$-functional by summing the weighted areas of the triangles, giving us a functional on a finite-dimensional space. Finally, we could compute the critical points of this functional and compute the Hessian.

6.3. Error analysis

We have strong numerical evidence that the values we obtained in [4] and in this paper are accurate. When true values are known, our methods find them. Even when true values are unknown, we observe a quadratic rate of convergence as we increase the number of points. Additionally, the value of the entropy in [4] has since been reproduced using different numerical methods [3]. Nonetheless, numerical evidence does not constitute a proof, so it would be good to prove error bounds on our estimates.

The starting point would be to consider our estimate (4) for the distance between two points in the half-plane with respect to the metric $g$. We would like to bound the difference between this estimate and the true distance in the metric $g$. One can do so by computing the Taylor polynomials of the squared distance function near the diagonal. From there, we could bound the error for the length functional, critical curve, Hessian, and so forth. One caveat is that variations of the discrete curve do a poor job of capturing variations of the true curve that are highly oscillatory, as we illustrate in Appendix A. This issue is resolved by the fact that, as we show in [5], highly oscillatory variations must increase length and therefore cannot contribute to the index.

References

Appendix A Eigenvalue asymptotics

For computing the index, we were concerned with the smallest eigenvalues of $L_m$ for small values of $m$. In this appendix, we go in the other direction and take a cursory look at the asymptotic behavior as either the eigenvalue index or $m$ becomes large.

In Figure 7, we extend Figure 2, showing the next few eigenvalues and eigenfunctions of $L_0$. We see that the eigenfunctions resemble the vibrational modes of a string. Meanwhile, in Figure 8, we show the eigenfunction corresponding to the least eigenvalue of $L_m$ for the first several values of $m$. We see that the eigenfunctions become concentrated at the outermost point of the torus cross-section.

Figure 7. A continuation of Figure 2, showing the next few eigenvalues and eigenfunctions of $L_0$.

Figure 8. The lowest eigenvalue and corresponding eigenfunction of $L_m$ for the first several values of $m$.

To understand this behavior in greater detail, we apply a Liouville transformation [1, 15] to the formula for the operator given in Proposition 2.20. As before, we let the variable parametrize the cross-sectional curve with respect to arc length. We also let the variable parametrize the curve with respect to Euclidean arc length. We use the notation and . Note that , so . With this notation, Proposition 2.20 tells us that is an eigenfunction of if

(8)

Applying the Liouville transformation, we can compute that $f$ is a solution to equation (8) if and only if the transformed function $u$ is a solution to

(9)

We can identify equation (9) as a Schrödinger equation
$$Hu = \lambda u,$$
with Hamiltonian
$$H = -\frac{d^2}{dt^2} + V,$$
where $V$ is the potential arising from the Liouville transformation.
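To make this Schrödinger form concrete, here is a minimal numerical sketch (our own illustration, not the paper's code): it discretizes an operator of the shape $H = -d^2/dt^2 + V$ on a closed curve of length $\ell$ using a periodic second-difference stencil and computes the lowest eigenvalues. The function name and the zero placeholder potential are assumptions for illustration; the paper's actual potential is not reproduced here.

```python
import numpy as np

def schrodinger_eigs(V, ell, N=400, k=6):
    """Lowest k eigenvalues of H = -d^2/dt^2 + V(t) on a closed curve
    of length ell, discretized at N equally spaced points with a
    periodic second-difference stencil."""
    h = ell / N
    t = np.arange(N) * h
    idx = np.arange(N)
    H = np.zeros((N, N))
    H[idx, idx] = 2.0 / h**2 + V(t)          # diagonal: kinetic + potential
    H[idx, (idx + 1) % N] = -1.0 / h**2      # periodic off-diagonal entries
    H[idx, (idx - 1) % N] = -1.0 / h**2
    return np.linalg.eigvalsh(H)[:k]

# Sanity check: with V = 0 on a curve of length 2*pi, the spectrum is
# close to 0, 1, 1, 4, 4, ... (squares of integers, doubled).
eigs = schrodinger_eigs(lambda t: np.zeros_like(t), 2 * np.pi)
```

In principle, substituting a potential computed along the cross-sectional curve for the placeholder reproduces this kind of eigenvalue computation.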

A.1. Asymptotics for large $n$

For high-frequency modes when $n$ is large, the kinetic energy term $-\frac{d^2}{dt^2}$ is more significant than the bounded potential energy term $V$. Based on this intuition and on [1, 11], we can approximate $V$ with its average value
$$\bar V = \frac{1}{\ell}\int_0^\ell V\,dt,$$
where $\ell$ is the length of the cross-sectional curve with respect to the Euclidean metric. Thus, we have
$$H \approx -\frac{d^2}{dt^2} + \bar V,$$
and so the eigenvalues are approximately
$$\lambda_n \approx \left(\frac{\pi n}{\ell}\right)^2 + \bar V.$$
Based on [1, 11], we expect the error in this approximation to tend to zero as $n \to \infty$.

Using our discrete curve, we can approximate the average value of the potential in a straightforward way. Since the points of the discrete curve are equally spaced with respect to $s$, we can approximate derivatives and integrals with respect to $s$ using differences and sums. We can easily compute $t$ along the discrete curve, and then, using the relationship between $s$ and $t$, we can approximate derivatives and integrals with respect to $t$ as well.
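As an illustration of this finite-difference bookkeeping, the following sketch (our own, with a made-up input format: an N x 2 array of points equally spaced in $s$) recovers the Euclidean arc length $t$, the derivative $dt/ds$, and the total Euclidean length $\ell$ from a discrete closed curve.

```python
import numpy as np

def euclidean_arclength_data(X, ds):
    """Given a closed discrete curve X (an N x 2 array of points equally
    spaced with spacing ds in the parameter s), approximate the Euclidean
    arc length t(s), the derivative dt/ds, and the total length ell."""
    # Central differences for dX/ds, wrapping around the closed curve.
    dX = (np.roll(X, -1, axis=0) - np.roll(X, 1, axis=0)) / (2 * ds)
    dtds = np.linalg.norm(dX, axis=1)        # dt/ds = |dX/ds|
    ell = np.sum(dtds) * ds                  # integral of dt/ds over s
    t = np.concatenate(([0.0], np.cumsum(dtds[:-1]) * ds))  # t(s_i)
    return t, dtds, ell

# Sanity check on a unit circle parametrized by s in [0, 2*pi).
N = 1000
s = np.arange(N) * 2 * np.pi / N
X = np.stack([np.cos(s), np.sin(s)], axis=1)
t, dtds, ell = euclidean_arclength_data(X, 2 * np.pi / N)
```

Integrals with respect to $t$ can then be approximated by weighting sums over the points by `dtds`, which is one way to average a quantity over the curve in the Euclidean parametrization.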

With this approximation, we can look at how well our numerically computed eigenvalues compare with the asymptotic estimate. We illustrate our findings in Figure 9.

Figure 9. In the upper plot, we plot the eigenvalues (green and orange dots) of our approximation to the stability operator, along with our asymptotic approximation (blue curve). In the lower plot, we plot the difference between the numerically computed eigenvalues and the asymptotic approximation.

We observe that the relative error between our numerically computed eigenvalues and our asymptotic estimate is small. However, in absolute terms, we observe that our numerically computed eigenvalues eventually start falling below the asymptotic curve as $n$ grows, in contrast to the convergence suggested by [1, 11]. The explanation is that [1, 11] consider continuous systems, so their results apply to the eigenvalues of the continuous operator. However, as our eigenfunctions oscillate faster, the discrete nature of our system becomes more apparent, and so, as we look at larger and larger eigenvalues, the eigenvalues of the discretized operator start to drift away from the corresponding eigenvalues of the continuous operator.

The phenomenon that large eigenvalues of a discrete system deviate from the eigenvalues of the corresponding continuous system appears to be well-known; see for example [8, Section 2.4] and [23]. Both references give an example where the eigenvalues of the discrete system grow as $\frac{4}{h^2}\sin^2\left(\frac{\pi n h}{2\ell}\right)$, where $h$ is the spacing between the points. This expression approaches $\left(\frac{\pi n}{\ell}\right)^2$ as $h \to 0$, but, for fixed $h$, it deviates from $\left(\frac{\pi n}{\ell}\right)^2$ at a rate proportional to $n^4$, the same rate that we observe in our work. Note, however, that the coefficient in front of $n^4$ that we empirically observe is close to, but not equal to, the coefficient that we would expect from this formula. We believe that the disparity is due to the fact that, in both of these references, the points of the discrete system are equally spaced, whereas in our numerical computation, our points are not equally spaced with respect to $t$.
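The deviation is easy to see in the textbook model problem (a sketch of the standard example, not the computation from [8] or [23]): on $N$ equally spaced points, the periodic second-difference operator has eigenvalues $\frac{4}{h^2}\sin^2(\pi j/N)$, which fall below the continuous eigenvalues $(2\pi j/\ell)^2$ by an amount that grows like the fourth power of the wavenumber $j$.

```python
import numpy as np

# Eigenvalues of the periodic second-difference operator on N equally
# spaced points versus the continuous operator -d^2/dt^2 on a circle.
N = 200
ell = 2 * np.pi
h = ell / N
j = np.arange(1, 20)                          # Fourier wavenumbers
discrete = (4 / h**2) * np.sin(np.pi * j / N) ** 2
continuous = (2 * np.pi * j / ell) ** 2       # equals j**2 when ell = 2*pi
gap = continuous - discrete                   # grows like j**4 * h**2 / 12
```

For small $j$ the two spectra agree closely, but the gap grows like $j^4$, which is the qualitative behavior described above.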

A.2. Asymptotics for large $m$

We now turn our attention to the behavior of the $n$th eigenvalue for large $m$. In this setting, the potential energy $V$ in $H$ is more significant than the kinetic energy. We restrict our attention to $m \neq 0$, in which case the minimum of $V$ occurs at the outermost point of the torus cross-section thanks to the $m$-dependent term in $V$. As we expect, we see in Figure 8 that our wave functions are concentrated near the minimum of the potential $V$. Moreover, as $m$ grows, the potential well becomes steeper, so, as expected, the wave functions become more concentrated.

As before, we parametrize with respect to arc length using the variable $s$ and with respect to Euclidean arc length using the variable $t$. Additionally, we will let the outermost point of the cross-section correspond to $t = 0$. Near this point, we approximate $V$ with a quadratic potential
$$V(t) \approx V(0) + \tfrac{1}{2}V''(0)\,t^2,$$
which gives the Hamiltonian for the quantum harmonic oscillator
$$H \approx -\frac{d^2}{dt^2} + V(0) + \tfrac{1}{2}V''(0)\,t^2.$$
The lowest energy of the quantum harmonic oscillator is slightly larger than the minimum value of the potential energy. More precisely, we have
$$\lambda_0 \approx V(0) + \sqrt{\tfrac{1}{2}V''(0)}.$$
More generally,
$$\lambda_n \approx V(0) + (2n+1)\sqrt{\tfrac{1}{2}V''(0)}.$$
We expect this approximation to improve as $m$ grows. Intuitively, the potential well becomes steeper, so the eigenfunction becomes more concentrated near the minimum, and so it doesn't "see" as well how $V$ differs from its quadratic Taylor approximation.
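The harmonic-oscillator spectrum used here is easy to verify numerically. The sketch below (our own illustration, not the paper's computation) discretizes $-d^2/dt^2 + \omega^2 t^2$ on a large interval with Dirichlet boundary conditions; its lowest eigenvalues should approach the exact values $(2n+1)\omega$.

```python
import numpy as np

def oscillator_eigs(omega, L=10.0, N=1000, k=4):
    """Lowest k eigenvalues of -d^2/dt^2 + omega^2 t^2 on [-L, L] with
    Dirichlet boundary conditions; the exact spectrum is (2n+1)*omega."""
    h = 2 * L / (N + 1)
    t = -L + h * np.arange(1, N + 1)           # interior grid points
    H = np.diag(2.0 / h**2 + omega**2 * t**2)  # kinetic + quadratic potential
    off = np.full(N - 1, -1.0 / h**2)
    H += np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[:k]

e = oscillator_eigs(1.0)   # close to 1, 3, 5, 7
```

Doubling $\omega$ doubles both the ground-state offset above the bottom of the well and the spacing between levels, matching the steeper-well intuition above.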

Figure 10. In the upper plot, we plot the eigenvalues (orange dots) of our approximation to the stability operator, along with our asymptotic approximation (blue curve). In the lower plot, we plot the difference between the eigenvalues and the asymptotic approximation.

Specializing to $n = 0$, we can now look at how well our numerically computed lowest eigenvalues compare with this estimate. We plot our findings in Figure 10. We see that, as before, the relative error between our numerically computed eigenvalues and our asymptotic estimate is small. This time, the absolute error is also small, but it still does not tend to zero. This error is sensitive to the number of points in our discrete curve, so we suspect that, once again, the behavior of the discrete system is deviating from the behavior of the continuous system. As the region supporting the bulk of the wave function becomes smaller, there are fewer and fewer points of the discrete curve in that region.

An additional culprit could be the naïve way in which we approximated $V$ and its derivatives. We simply approximated them using finite differences as above, and then evaluated them at the point on the discrete curve where $V$ attains its minimum. In principle, however, we could use either the differential equations defining