Convergence analysis of subdivision processes on the sphere

Svenja Hüning et al., January 26, 2020

This paper provides a strategy to analyse the convergence of nonlinear analogues of linear subdivision processes on the sphere. In contrast to previous work, we study the Riemannian analogue of a linear scheme on a Riemannian manifold with positive sectional curvature. Our results apply to general subdivision schemes without any sign restriction on the mask.


1. Introduction

Subdivision schemes are iterative refinement algorithms used to produce smooth curves and surfaces. For data lying in linear spaces those refinement rules are well-studied and find applications in various areas ranging from approximation theory to computer graphics. We recommend [8, 1] and [19] for an introduction and a good overview of this topic.

This paper contributes new results to the convergence analysis of subdivision schemes applied to data lying in nonlinear spaces, such as Lie groups, symmetric spaces or Riemannian manifolds. Several different methods to transfer linear schemes to nonlinear geometries have been studied (see [13] for an overview). We mention the log-exp-analogue of [2] and [20], which uses the exponential map and the linear structure of the tangent space, as well as the projection analogue, which can be applied to surfaces embedded into a Euclidean space, see [26] and [12]. Generally, such methods are only locally well-defined. The Riemannian analogue, which is obtained from a linear scheme by replacing affine averages by weighted geodesic averages, can be made globally well-defined on complete Riemannian manifolds with nonpositive sectional curvature [15, 23, 9, 10]. The well-definedness is based on the existence and uniqueness of the so-called Riemannian center of mass, which has been studied in various contexts [16, 21, 3, 4, 18].

Having transferred a linear scheme to nonlinear geometries, questions about the properties of its analogue arise naturally. While the smoothness of the resulting limit curves has already been thoroughly investigated, see e.g. [24, 11, 22, 26, 13, 17], the situation is different for the convergence analysis of nonlinear analogues of linear schemes. Results in the literature based on so-called proximity conditions are limited to 'dense enough' input data. Convergence results for all input data are given for univariate interpolatory schemes [25], multivariate schemes with nonnegative mask [9, 10], and schemes which are obtained from linear ones by choosing binary geodesic averages instead of linear binary averages [6, 7]. In [15] we proved that the Riemannian analogue of a univariate linear scheme on complete Riemannian manifolds with nonpositive sectional curvature converges if the linear scheme converges uniformly. This result was previously only known for schemes with nonnegative mask [23, 9, 10].

In this paper, we extend our work to a convergence result for refinement algorithms on the sphere. It turns out that even for this rather elementary manifold the situation is appreciably different from the case of nonpositive curvature.

The paper is organised as follows. We start by recalling some basic facts about linear subdivision and introducing our notation. Next, we discuss the Riemannian analogue of a linear scheme on positively curved spaces. In particular, we discuss well-definedness on the sphere. Afterwards, we introduce our strategy to prove convergence. To make this rather technical part easier to understand, we illustrate our computations in terms of a major example (the cubic Lane-Riesenfeld scheme). The last part contains the convergence analysis of several well-known subdivision schemes.

2. Linear subdivision and its Riemannian analogue

We start by recalling basic concepts. Let $p = (p_i)_{i \in \mathbb{Z}}$ be a sequence of points in a linear space. We refer to this sequence as our input data. A linear, binary subdivision rule $S$ maps the input data $p$ to a new sequence $Sp$, where

$$Sp_i = \sum_{j \in \mathbb{Z}} a_{i-2j}\, p_j, \qquad i \in \mathbb{Z}.$$

The sequence of coefficients $a = (a_i)_{i \in \mathbb{Z}}$ is called the mask of the scheme. Throughout this paper we assume that the mask is finitely supported, i.e., $a_i \neq 0$ for only finitely many $i \in \mathbb{Z}$. A subdivision scheme is the repeated application $Sp$, $S^2p$, $S^3p, \ldots$ of the subdivision rule. Affine invariance, resp. translation invariance, of a linear subdivision scheme is expressed by

(1)  $\sum_{j \in \mathbb{Z}} a_{2j} = \sum_{j \in \mathbb{Z}} a_{2j+1} = 1.$

It is a necessary condition for its convergence [1, 8]. From now on, we assume that all considered schemes are affine invariant.

We call a linear subdivision scheme $S$ convergent, if there exist piecewise linear functions $F_k$ with $F_k(i/2^k) = S^k p_i$ which converge, uniformly on compact sets, to a continuous limit.
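For concreteness, here is a minimal sketch (in Python, not part of the paper) of one refinement round of the cubic Lane-Riesenfeld rule which later serves as the running example in Example 4; the closed-polygon boundary handling and the function name are our own choices.

```python
import numpy as np

def lane_riesenfeld_cubic(p):
    """One round Sp of the linear cubic Lane-Riesenfeld rule:
    Sp_{2i} = (p_{i-1} + 6 p_i + p_{i+1}) / 8,  Sp_{2i+1} = (p_i + p_{i+1}) / 2.
    Both weight vectors sum to 1, i.e. the rule is affine invariant, cf. (1).
    p is an (N, d) array of a closed polygon; indices are taken mod N."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    Sp = np.empty((2 * n, p.shape[1]))
    for i in range(n):
        Sp[2 * i] = (p[i - 1] + 6 * p[i] + p[(i + 1) % n]) / 8
        Sp[2 * i + 1] = (p[i] + p[(i + 1) % n]) / 2
    return Sp

# Repeated application Sp, S^2 p, ... approaches the cubic B-spline limit curve.
polygon = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
for _ in range(4):
    polygon = lane_riesenfeld_cubic(polygon)
```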

Denote by $M$ a manifold with Riemannian metric. The distance $\operatorname{dist}(x, y)$ of two points $x, y \in M$ is given by the infimum of the lengths of all curves $c \colon [0, 1] \to M$ with $c(0) = x$ and $c(1) = y$, i.e.

$$\operatorname{dist}(x, y) = \inf_{c(0) = x,\ c(1) = y} \operatorname{length}(c).$$

To adapt a subdivision scheme to data lying in nonlinear geometries we replace affine averages by geodesic averages. For this purpose, we observe that the point $Sp_i$ can be equivalently described as

$$Sp_i = \operatorname*{argmin}_{x} \sum_{j \in \mathbb{Z}} a_{i-2j}\, \|x - p_j\|^2.$$

A natural extension of $S$ to nonlinear data is defined by replacing the Euclidean distance by the Riemannian distance. We define

(2)  $Tp_i = \operatorname*{argmin}_{x \in M} \sum_{j \in \mathbb{Z}} a_{i-2j} \operatorname{dist}(x, p_j)^2,$

and call $T$ the Riemannian analogue of the linear subdivision rule $S$, see also [13]. The minimiser in (2) is the Riemannian center of mass of the points $p_j$ with respect to the weights $a_{i-2j}$.

Locally, the minimiser in (2) always exists, but globally the question is more difficult. On Cartan-Hadamard manifolds, i.e. simply connected, complete Riemannian manifolds with nonpositive sectional curvature, a unique global minimiser always exists [14, 21, 16].
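To illustrate definition (2), the following sketch (ours, not from the paper) computes a weighted Riemannian center of mass on the unit sphere by an intrinsic fixed-point iteration; it uses the explicit formulas for $\exp_p$ and $\exp_p^{-1}$ given later in Section 4.1. The step count is an ad-hoc choice, and the iteration is only meaningful where the minimiser exists and is unique.

```python
import numpy as np

def sphere_exp(p, v):
    """Exponential map on the unit sphere: exp_p(v) = cos|v| p + sin|v| v/|v|."""
    t = np.linalg.norm(v)
    return p if t < 1e-15 else np.cos(t) * p + np.sin(t) * v / t

def sphere_log(p, q):
    """Inverse exponential map exp_p^{-1}(q), cf. (8); q must not be antipodal to p."""
    c = np.clip(np.dot(p, q), -1.0, 1.0)
    w = q - c * p                          # component of q orthogonal to p
    nw = np.linalg.norm(w)
    return np.zeros_like(p) if nw < 1e-15 else np.arccos(c) * w / nw

def center_of_mass(points, weights, x0, steps=100):
    """Minimise f(x) = sum_j w_j dist(x, p_j)^2 over the sphere.
    Since grad f(x) = -2 sum_j w_j exp_x^{-1}(p_j) (cf. (13)), the step below
    is the classical Karcher fixed-point iteration x <- exp_x(sum_j w_j log_x p_j)."""
    x = np.asarray(x0, dtype=float)
    x /= np.linalg.norm(x)
    for _ in range(steps):
        x = sphere_exp(x, sum(w * sphere_log(x, p) for w, p in zip(weights, points)))
    return x
```

For nonnegative weights summing to 1 this is the classical Karcher iteration; for masks with negative coefficients the same iteration can only be used inside a region where the minimiser of (2) is unique, which is exactly the issue discussed in Section 3.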

On positively curved spaces, however, the situation is different. We consider this problem in more detail in the next section. Before that, we introduce the contractivity condition and the displacement-safe condition which we need in our analysis.

We say that $T$ satisfies a contractivity condition with contractivity factor $\gamma < 1$, if

(3)  $\delta(Tp) \le \gamma\, \delta(p), \qquad \text{where } \delta(p) := \sup_{i \in \mathbb{Z}} \operatorname{dist}(p_i, p_{i+1}).$

The subdivision rule $T$ is called displacement-safe, if

(4)  $\sup_{i \in \mathbb{Z}} \operatorname{dist}(Tp_{2i}, p_i) \le C\, \delta(p)$

for some constant $C \ge 0$.

We say that $T$ converges for input data $p$ if the sequence

$$p,\ Tp,\ T^2p,\ \ldots$$

becomes denser and approaches a continuous limit curve. Formally, we treat convergence in a coordinate chart, linearly interpolating the points $T^k p_i$

by a piecewise linear function $F_k$ with $F_k(i/2^k) = T^k p_i$, and observe convergence of the functions $F_k$ for $k \to \infty$. It has been shown in various situations that displacement-safe schemes which admit a contractivity factor are convergent [7, 6, 23, 15]. For the reader's convenience we repeat the precise statement, which can be found for example in [23] and [15].

Theorem 1.

Consider a linear, binary, affine invariant subdivision scheme $S$. Assume that the Riemannian analogue $T$ of $S$ on a manifold $M$ is well-defined. Then, $T$ converges to a continuous limit for all input data $p$, if it admits a contractivity factor $\gamma < 1$ and is displacement-safe.
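Putting the ingredients of Theorem 1 side by side with definition (2), a sketch of the Riemannian analogue $T$ of the cubic Lane-Riesenfeld rule on the sphere could look as follows (again our own illustration, reusing `center_of_mass` from the sketch above; the input data are assumed clustered enough for well-definedness).

```python
def riemannian_lane_riesenfeld(points):
    """Riemannian analogue T of the cubic Lane-Riesenfeld rule on the sphere:
    each affine average of S is replaced by the Riemannian center of mass (2)
    with the same weights. Closed polygon, indices taken mod N."""
    n, Tp = len(points), []
    for i in range(n):
        stencil = [points[i - 1], points[i], points[(i + 1) % n]]
        Tp.append(center_of_mass(stencil, [1/8, 6/8, 1/8], x0=points[i]))   # Tp_{2i}
        Tp.append(center_of_mass(stencil[1:], [1/2, 1/2], x0=points[i]))    # Tp_{2i+1}
    return Tp
```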

3. Riemannian center of mass on manifolds with positive sectional curvature

Before we restrict ourselves to the unit sphere, we discuss the difficulties that arise when studying the Riemannian analogue of a linear subdivision scheme on positively curved manifolds. Let $M$ be a complete, simply connected Riemannian manifold with positive sectional curvature bounded by $\Delta$. Denote by $B_r(m) = \{x \in M : \operatorname{dist}(x, m) < r\}$ the geodesic ball of radius $r$ around $m$, where dist again denotes the Riemannian distance.

3.1. Problem setting

To study the convergence of a Riemannian subdivision rule as given in (2), we have to deal with the question of whether the function

(5)  $f(x) = \sum_{j=0}^{N} \alpha_j \operatorname{dist}(x, p_j)^2$

admits a unique minimiser. Here $p_0, \ldots, p_N$ are fixed points on the manifold and $\alpha_0, \ldots, \alpha_N$ are real coefficients summing to $1$. Later, the points correspond to the input data while the coefficients belong to the mask of a subdivision scheme. Denote by

$$\mu := \sum_{\alpha_j < 0} |\alpha_j|$$

the sum of the absolute values of the negative coefficients. In contrast to Cartan-Hadamard manifolds, we cannot hope for general global existence and uniqueness of the Riemannian center of mass. To see this, consider the north pole resp. south pole of the sphere and ask for their geodesic midpoint. Clearly, each point on the equator is a suitable choice and thus a minimiser of $f$. One can show that locally there always exists a unique minimiser, while globally there can be infinitely many.
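A quick numerical check of the midpoint example (ours): for the north and south pole, the objective $f$ of (5) with weights $\frac12, \frac12$ takes the same value at every point of the equator, so every equator point is a minimiser.

```python
import numpy as np

north, south = np.array([0., 0., 1.]), np.array([0., 0., -1.])
for phi in np.linspace(0, 2 * np.pi, 5):
    x = np.array([np.cos(phi), np.sin(phi), 0.])   # a point on the equator
    f = 0.5 * np.arccos(x @ north) ** 2 + 0.5 * np.arccos(x @ south) ** 2
    print(f)   # always (pi/2)^2, independently of phi
```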

A number of contributions deal with the effect of the sectional curvature, the distances between the points, and the choice of the coefficients on the existence of a unique minimiser, see for example [16, 3]. In [3] the authors provide explicit bounds on the input data (depending on the curvature and the chosen coefficients) which ensure the existence and uniqueness of a minimiser of (5) on manifolds with positive sectional curvature. In our setting, Corollary 9 of [3] reads as follows.

Lemma 2 (Dyer et al., [3]).

Let $p_j \in B_\rho(m)$, $j = 0, \ldots, N$, for some $m \in M$ and $\rho > 0$. Then, the function $f$ has a unique minimiser in a geodesic ball around $m$, if

  1. , with denoting the injectivity radius of ,

  2. ,

  3. .

A convergence result for nonlinear subdivision schemes depends on the capability to control the distances of consecutive points of the refined sequence from each other, as well as their distance to the input data. Unfortunately, Lemma 2 cannot directly be used to control those distances.

Summarising, on manifolds with positive sectional curvature

  • we cannot hope for a convergence result which is valid for all input data.

  • we obtain a local setting in which the Riemannian analogue of a linear subdivision scheme is well defined, see [3].

  • we have to find a strategy to estimate distances between consecutive points of the refined data as well as their distance to the input data.

3.2. The Riemannian analogue of a linear subdivision scheme on the unit sphere

From now on, we restrict ourselves to the unit sphere $S^n \subset \mathbb{R}^{n+1}$ for $n \ge 1$. In particular, we have $\Delta = 1$. We provide a setting on the unit sphere in which we can define the Riemannian analogue of a linear subdivision scheme. Choose $p_j$, $j = 0, \ldots, N$, such that $p_j \in B_\rho(m)$ for some $m \in S^n$ and $\rho > 0$. Since the sectional curvature of the unit sphere is $\Delta = 1$ and the injectivity radius is $\pi$, Lemma 2 says the following: For input data in $B_\rho(m)$, the minimiser of $f$ is unique and lies in a ball $B_r(m)$, if

(6)
(7)

In the special case of a scheme with only nonnegative coefficients, i.e., $\mu = 0$, these conditions reduce to the following: If $\rho$ is small enough, there exists a radius $r$ with $r \ge \rho$ such that the function $f$ has a unique minimiser inside $B_r(m)$. In our particular setting of subdivision rules, we summarise the results of [3] as follows.

Proposition 3.

Let $T$ be the Riemannian analogue of a linear subdivision scheme with mask $(a_i)_{i \in \mathbb{Z}}$ on the unit sphere $S^n$. With $\mu$ denoting the sum of the absolute values of the negative mask coefficients, we have the two cases:

Case $\mu = 0$:

$T$ is well defined if the input data points contributing to the computation of $Tp_i$ lie within a ball of sufficiently small radius $\rho$.

Case $\mu > 0$:

$T$ is well defined if the input data points contributing to the computation of $Tp_i$ lie within a ball of radius $\rho$ such that there exists an $r$ satisfying (6) and (7).

4. A strategy to prove convergence of Riemannian subdivision schemes

We present a strategy to prove convergence results for the Riemannian analogue of a linear scheme on the unit sphere. In order to give bounds for the quantities appearing in the contractivity condition (3) and the displacement-safe condition (4), we join the points involved by curves and estimate the lengths of those curves. This requires technical details involving a second order Taylor approximation and estimates for the gradient and the Hessian of squared distance functions on the unit sphere. Throughout this part, the cubic Lane-Riesenfeld rule serves as our main example to illustrate the results. We assume that the considered minima are well defined and unique.

4.1. The Riemannian distance function on the unit sphere

We use explicit formulas for the gradient and the Hessian of the squared distance function computed in [18, Supplement A] as an example of a more general analysis of Hessians of squared distance functions on manifolds. We introduce some notation and state results of [18] which we later use.

Let $T_pS^n$ denote the tangent space at a point $p \in S^n$. For two points $p, q \in S^n$, $q \neq -p$, their distance is given by $\operatorname{dist}(p, q) = \arccos\langle p, q \rangle$. The exponential map at $p$ is given by

$$\exp_p(v) = \cos(\|v\|)\, p + \sin(\|v\|)\, \frac{v}{\|v\|} \quad \text{for } v \in T_pS^n \setminus \{0\}, \qquad \exp_p(0) = p.$$

The inverse of the exponential map at $p$ is well defined everywhere except for the antipodal point $-p$ of $p$:

(8)  $\exp_p^{-1}(q) = \frac{\arccos\langle p, q \rangle}{\sqrt{1 - \langle p, q \rangle^2}}\, \big( q - \langle p, q \rangle\, p \big).$

We will always tacitly assume that $\frac{\arccos t}{\sqrt{1 - t^2}}$ means an analytic function which evaluates to $1$ for $t = 1$. For a tangent vector $v \in T_pS^n$, $\exp_p(v)$ denotes the point on $S^n$ which is reached by the geodesic starting in $p$ in direction $v$, travelling the length $\|v\|$. We can therefore use $\exp_p$ to switch between $T_pS^n$ and $S^n$, such that straight lines through the origin in the tangent space are isometrically mapped to geodesics through $p$. Let

$$g \colon S^n \to \mathbb{R}$$

be a function on the sphere. Then, $\tilde g := g \circ \exp_p$ is a representation of $g$ with respect to the coordinate chart $\exp_p^{-1}$. Since the first derivative of the exponential map at the origin is the identity, we have

(9)  $\operatorname{grad} \tilde g(0) = \operatorname{grad} g(p).$

As far as the first derivative is concerned, it makes no difference if we consider $g$ or $\tilde g$. The Hessian of $\tilde g$ can be computed since the function $\tilde g$ is defined on the linear space $T_pS^n$. For purposes of this paper, we define the Hessian of $g$ by

(10)  $H_g(p) := H_{\tilde g}(0) = H_{g \circ \exp_p}(0).$

For any fixed $q \in S^n$, the gradient of the squared distance function $g(x) = \operatorname{dist}(x, q)^2$ is given by

(11)  $\operatorname{grad} g(p) = -2 \exp_p^{-1}(q).$

To simplify notation we introduce the analytic function

$$\sigma(t) = t \cot t$$

with $\sigma(0) = 1$. Let $d = \operatorname{dist}(p, q)$, let $e = \exp_p^{-1}(q)/d$ be the unit tangent vector at $p$ pointing towards $q$, and let $I$ be the identity matrix. The Hessian of $g(x) = \operatorname{dist}(x, q)^2$ (in the sense defined above) has been computed in [18] as

(12)  $H_g(p) = 2\, e e^T + 2\, \sigma(d)\, \big( I - p\, p^T - e\, e^T \big).$

Here $p^T$, $e^T$ denote the transposes of the column vectors $p$ resp. $e$. If $q = p$, we have $H_g(p) = 2 (I - p\, p^T)$.

The eigenvalues of the Hessian are $2$ and $2\sigma(\operatorname{dist}(p, q))$.
By linearity we obtain

(13)  $\operatorname{grad} f(p) = -2 \sum_{j} \alpha_j \exp_p^{-1}(p_j),$

(14)  $H_f(p) = \sum_{j} \alpha_j\, H_{g_j}(p),$

with $g_j(x) = \operatorname{dist}(x, p_j)^2$, $j = 0, \ldots, N$.
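The formulas (8)-(14) are easy to check numerically. The sketch below (ours; it reuses `sphere_exp` and `sphere_log` from the sketch in Section 2) implements the Hessian (12) and compares it, in the chart $\exp_p$ at the north pole, against a finite-difference Hessian in the sense of definition (10).

```python
import numpy as np

def sigma(t):
    """sigma(t) = t / tan(t), extended analytically by sigma(0) = 1."""
    return 1.0 if abs(t) < 1e-12 else t / np.tan(t)

def hessian_sq_dist(p, q):
    """Hessian (12) of g(x) = dist(x, q)^2 at p, as an (n+1)x(n+1) matrix
    that annihilates p; eigenvalues 2 (direction e) and 2 sigma(d)."""
    d = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    I = np.eye(len(p))
    if d < 1e-12:
        return 2.0 * (I - np.outer(p, p))
    e = sphere_log(p, q) / d               # unit tangent at p pointing to q
    return 2.0 * np.outer(e, e) + 2.0 * sigma(d) * (I - np.outer(p, p) - np.outer(e, e))

# Finite-difference Hessian of g o exp_p at 0, cf. definition (10):
p = np.array([0.0, 0.0, 1.0])                       # north pole
q = np.array([np.sin(0.7), 0.0, np.cos(0.7)])       # dist(p, q) = 0.7
g = lambda v: np.arccos(np.clip(np.dot(sphere_exp(p, v), q), -1, 1)) ** 2
h, E = 1e-5, np.eye(3)[:2]                          # e_1, e_2 span T_p S^2
H_num = np.array([[(g(h * (a + b)) - g(h * a) - g(h * b) + g(0 * a)) / h**2
                   for b in E] for a in E])
assert np.allclose(H_num, hessian_sq_dist(p, q)[:2, :2], atol=1e-4)
```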

4.2. Taylor approximation of the squared distance function

The second order Taylor expansion of the squared distance function helps to find an upper bound on the distance between the minimiser of $f$ and the input data points $p_j$.

Assume that $x^*$ is the minimiser of $f$. Without loss of generality we choose coordinates such that $x^*$ is the north pole $e_{n+1}$. Then, the first $n$ canonical basis vectors span $T_{x^*}S^n$. Now, we consider the coordinate representation $\tilde f = f \circ \exp_{x^*}$ and compute its Hessian. Due to the particular coordinate system, the gradient of $\tilde f$ at $0$ consists of the first $n$ entries of the vector given by (13). The Hessian $H$ is given by the submatrix of (14) obtained by deleting the last column and row. The second order Taylor approximation of $\tilde f$ is given by

$$T_2\tilde f(v) = \tilde f(0) + \langle \operatorname{grad} \tilde f(0), v \rangle + \tfrac{1}{2}\, v^T H v.$$

Differentiation leads to

(15)  $\operatorname{grad}\, T_2\tilde f(v) = \operatorname{grad} \tilde f(0) + H v.$

Since we are looking for minimisers of the function $\tilde f$, the idea is to consider minimisers of $T_2\tilde f$ instead. If $H$ is invertible, then the condition

(16)  $\operatorname{grad} \tilde f(0) + H v = 0$

is solvable, and $T_2\tilde f$ assumes a minimum in the point

(17)  $v^* = -H^{-1} \operatorname{grad} \tilde f(0).$

This is the unique stationary point of the second order Taylor approximation $T_2\tilde f$.
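In code, (13), (14) and (17) combine into a Newton-type step for the minimiser. The following sketch (ours) works in ambient coordinates and inverts the Hessian on the tangent space via a pseudo-inverse instead of deleting the last row and column, which amounts to the same thing at the north pole; it reuses `sphere_exp`, `sphere_log` and `hessian_sq_dist` from the earlier sketches.

```python
import numpy as np

def taylor_newton_step(x, points, alpha):
    """Minimise the Taylor approximation T_2 f~ around x and map back:
    v* = -H^{-1} grad f~(0) as in (17), then return exp_x(v*)."""
    grad = -2.0 * sum(a * sphere_log(x, p) for a, p in zip(alpha, points))  # (13)
    H = sum(a * hessian_sq_dist(x, p) for a, p in zip(alpha, points))       # (14)
    v = -np.linalg.pinv(H) @ grad    # H annihilates x, so pinv solves (16) on T_x
    return sphere_exp(x, v)
```

Iterating this step from a point close to the data is one practical way to locate the minimiser of (5); near the minimiser the quadratic model, and hence the step (17), is exact to second order.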

After these preparations, we continue with the distance estimates announced above.

4.3. Variable mask

We introduce a parameter $t \in [0, 1]$ and vary the coefficients of a linear scheme such that they depend linearly on $t$. The idea is to choose coefficient functions $\alpha_j(t)$ such that at time $t = 0$ we exactly know the minimiser of the corresponding function $f_0$, call it the reference point $x_{\mathrm{ref}}$, and at time $t = 1$ the minimiser of $f_1$ equals $Tp_i$. We always assume

(18)  $\sum_{j} \alpha_j(t) = 1 \quad \text{for all } t \in [0, 1].$

We will see that the choice of the reference point $x_{\mathrm{ref}}$ is crucial for our approach to work and has to be made individually for each scheme. We illustrate the procedure by means of an example.

Example 4 (cubic Lane-Riesenfeld scheme, part I).

We consider the linear cubic Lane-Riesenfeld subdivision rule $S$ defined by

$$Sp_{2i} = \tfrac{1}{8}\, p_{i-1} + \tfrac{6}{8}\, p_i + \tfrac{1}{8}\, p_{i+1}, \qquad Sp_{2i+1} = \tfrac{1}{2}\, p_i + \tfrac{1}{2}\, p_{i+1},$$

for $i \in \mathbb{Z}$. Since the mask has nonnegative coefficients, Proposition 3 ensures that the Riemannian version $T$ of $S$ is well defined if the input data points contributing to each rule lie in a ball as described there.

After contractivity of $T$ is shown (see (27)), this bound applies also to the iterates $T^k p$.

We observe that $Tp_{2i+1}$ is the geodesic midpoint of $p_i$ and $p_{i+1}$. So, its distance to the input data is bounded by $\frac{1}{2}\delta(p)$. The more crucial part is to deal with $Tp_{2i}$. Consider

(19)  $f_t(x) = \alpha_{-1}(t) \operatorname{dist}(x, p_{i-1})^2 + \alpha_0(t) \operatorname{dist}(x, p_i)^2 + \alpha_1(t) \operatorname{dist}(x, p_{i+1})^2$

for $t \in [0, 1]$, $x \in S^n$, and coefficient functions $\alpha_j(t)$. Without loss of generality we assume that $i = 0$. With $\alpha_{-1}(1) = \alpha_1(1) = \frac{1}{8}$ as well as $\alpha_0(1) = \frac{6}{8}$, $Tp_0$ is the minimiser of $f_1$. We choose the coefficient functions as

(20)  $\alpha_{-1}(t) = \alpha_1(t) = \frac{t}{8}, \qquad \alpha_0(t) = 1 - \frac{t}{4}.$

The minimiser of $f_0$ equals $p_0$, while at time $t = 1$ the minimiser of $f_1$ is exactly the point $Tp_0$. We continue the analysis in Example 6. ∎
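The homotopy (20) can also be traced numerically: the sketch below (ours, reusing `center_of_mass` from Section 2) follows the minimiser as $t$ runs from 0 to 1, starting at the reference point $p_0$, and accumulates chord lengths. This discretises the curve $x(t)$ introduced in (21) below, whose length bounds $\operatorname{dist}(p_0, Tp_0)$.

```python
import numpy as np

# Three nearby unit vectors playing the role of p_{-1}, p_0, p_1:
pts = [v / np.linalg.norm(v)
       for v in np.array([[1., 0., .1], [1., .1, 0.], [1., .1, .1]])]

length, x_prev = 0.0, pts[1]                    # x(0) = p_0 is the reference point
for t in np.linspace(0.0, 1.0, 50)[1:]:
    alpha = [t / 8, 1 - t / 4, t / 8]           # coefficient functions (20)
    x = center_of_mass(pts, alpha, x0=x_prev)   # minimiser x(t) of f_t
    length += np.arccos(np.clip(x_prev @ x, -1, 1))
    x_prev = x

# 'length' approximates the length of the curve of minimisers, an upper
# bound for dist(p_0, Tp_0) as used in Section 4.4; x_prev is now Tp_0.
print(length, x_prev)
```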

4.4. Estimating the distance to a minimiser

We introduce the curve $x(t)$ defined as

(21)  $x(t) = \operatorname*{argmin}_{x \in S^n}\, f_t(x), \qquad t \in [0, 1].$

Since $x(t)$ connects $x_{\mathrm{ref}} = x(0)$ and $Tp_i = x(1)$, we have $\operatorname{dist}(x_{\mathrm{ref}}, Tp_i) \le \int_0^1 \|\dot x(t)\|\, dt$. The idea is to estimate $\|\dot x(t)\|$ in order to find an upper bound on the distance between $x_{\mathrm{ref}}$ and $Tp_i$. If, for example, we choose the reference point to be one of our input data points, this strategy helps us to control the distance between the minimiser of $f_1$ and the initial data. We start with some assumptions and afterwards show what conclusions we can draw from them.

Assumption 1.

Assume that

for some constant. We choose this constant such that the minimiser of $f_t$ is locally well defined for all $t \in [0, 1]$.

Assumption 2.

Let the constant be as in Assumption 1 and $x(t)$ be as in (21). Assume that

for some constant .

Assumption 3.

With , as in Assumption 1 resp. 2, assume that the following is true: If for all , then there exists a constant such that for all .

Assumption 1 is necessary for the well-definedness of the Riemannian analogue of a linear scheme. Assumptions 2 and 3 help to estimate the distance between $x_{\mathrm{ref}}$ and the minimiser $x(t)$.

Lemma 5.

Let $x(t)$ denote the curve which at time $t$ is the minimiser of $f_t$. If Assumptions 1, 2 and 3 are satisfied for the input data and the constants involved there, then

for all and .

Proof.

Let . Then,

Assume that . Since is continuous, there exists an interval , , with for any . But this is a contradiction to being maximal. ∎

We continue with the analysis of a model subdivision rule.

Example 6 (cubic Lane-Riesenfeld scheme, part II).

We have

Recall that the second equality is based on the assumption . In particular, the inverse exists. By (13) we have

using the geometric fact . We conclude that

Assuming that for some and , the above shows that

This is a first piece of information needed to establish constants , , and eventually prove convergence of the cubic Lane-Riesenfeld scheme. ∎

We are now estimating the norm of $\dot x(t)$ for some fixed $t$. The computations are done in the tangent space of $x(t)$, where for simplicity we always assume that $x(t)$ is the north pole $e_{n+1}$. Of course, the coordinates of the $p_j$'s change for different $t$, but since we only consider distances, which are independent of the chosen coordinate system, we do not indicate the coordinate change in the notation. The Hessian of the squared distance function has the eigenvalues $2$ and $2\sigma(\operatorname{dist}(p, q))$, see Section 4.1. In particular, since the radius of the ball containing the input data and the minimiser is small enough (see Section 3.2), we have $\sigma(\operatorname{dist}(x(t), p_j)) > 0$ for all $j$. So, we know that the inverse of the Hessian exists. The location of the minimiser is implicitly defined by the vanishing of the gradient of $\tilde f_t$, so the derivative $\dot x(t)$ is determined by the derivatives $\dot\alpha_j(t)$ of the coefficient functions. Therefore, for purposes of computing $\dot x(t)$, we can replace $\tilde f_t$ by its 2nd order Taylor expansion $T_2\tilde f_t$. From (17) we get

(22)  $\dot x(t) = -H^{-1}\, \frac{d}{dt} \operatorname{grad} \tilde f_t(0),$

where we have used $\operatorname{grad} \tilde f_t(0) = 0$.

The following two lemmas estimate the spectral norm of the inverse of the Hessian and the derivative of the gradient in order to find an upper bound on $\|\dot x(t)\|$.

Lemma 7.

Assume that for some and that for and all . Let be constants such that . Then,

with for all .

Proof.

We have

since the maximal eigenvalue of the Hessian of the squared distance function is $2$. In particular, the norm of any eigenvalue of the Hessian is bounded from above by . Furthermore, we see that

(23)

Denote by the smaller eigenvalue of . In fact,

Since and the fact that $\sigma$ is positive and monotonically decreasing in , we deduce that

By (4.4) we therefore obtain

and the minimal eigenvalue of is bounded from below by . This implies the statement of the lemma. ∎

Lemma 8.

Assume that for some and that for and all . Let be constants such that . Then,

for all .

Proof.

Fix some . Since and, by assumption, we deduce that

So, (13) implies the stated estimate. ∎

We summarise the results of the previous two lemmas in

Proposition 9.

Assume that for some and that for and all . Let be constants such that . Then,

(24)

with for all .

Proof.

By (22) we have

for all $t \in [0, 1]$. The statement then follows from Lemma 7 and Lemma 8. ∎

We illustrate the results of Proposition 9 by means of our main example.

Example 10 (cubic Lane-Riesenfeld scheme, part III).

Lemma 7 shows that

for all with and . Since this function is strictly increasing in the interval we have

for all . We conclude that

This bound only depends on and . This estimate is needed for the verification of Assumption 3 of our method.

Remember that we have chosen as well as

(25)

Assume that is such that , for . Since Equation (24) reads

(26)

for all . In order to make this bound a proof of Assumption 3, we must do some experimenting. For the sake of demonstration, choose and . Then for any , see Example 6. In particular, Assumption 2 is satisfied for all . Next, compute

Since $\sigma$ is positive and monotonically decreasing in , we conclude