Closed-form geodesics and trust-region method to calculate Riemannian logarithms on Stiefel and its quotient manifolds

03/12/2021 · by Du Nguyen, et al.

We provide two closed-form geodesic formulas for a family of metrics on the Stiefel manifold, parameterized by two positive numbers, having both the embedded and canonical metrics as special cases. The closed-form formulas allow us to compute geodesics by matrix exponentials in reduced dimension for low-rank manifolds. Combined with the use of Fréchet derivatives to compute the gradient of the squared Frobenius distance between a geodesic's ending point and a given point on the manifold, we show the logarithm map and geodesic distance between two endpoints on the manifold can be computed by minimizing this squared distance with a trust-region solver. This leads to a new framework to compute the geodesic distance for manifolds with known geodesic formulas but no closed-form logarithm map. We show the approach works well for Stiefel as well as flag manifolds. The logarithm map could be used to compute the Riemannian center of mass for these manifolds equipped with the above metrics. We also deduce simple trigonometric formulas for the Riemannian exponential and logarithm maps on the Grassmann manifold.






1 Introduction

The Stiefel manifold, the manifold of rectangular matrices with orthonormal columns of a given number of rows and columns, is important in various applications, including computer vision [1], neural networks [2], and statistics [3]. It has two well-known metrics, the embedded metric and the canonical metric defined in [4], which were treated separately in that foundational paper. While both have known exponential maps, allowing one to compute the geodesic ending point when an initial position and velocity are given, they have no known closed-form logarithm map, which would give us a velocity vector of minimal length that produces a geodesic connecting the initial point with an ending point. To compute statistics on Stiefel manifolds, for example, the Riemannian center of mass [5], approximate geodesics are often used [3]. There have been several research efforts to compute the logarithm map numerically, with [6, 7] treating the case of the canonical metric while [8, 9] treated the embedded metric. To the best of our knowledge, a common treatment of the logarithm map for both metrics is not yet available.

Recently, [10] introduced a family of metrics on the Stiefel manifold, parameterized by a real number, that contains both the canonical and embedded metrics as special cases, with a closed-form geodesic formula. That geodesic formula requires calculating exponentials of matrices of dimension equal to the longer dimension of a Stiefel matrix. Later, unaware of [10], in [11] we introduced essentially the same family of metrics, with a different parameterization. In this paper, we provide two new geodesic formulas for these metrics, generalizing both geodesic formulas in [4]. In particular, the efficient formula for geodesics in Theorem 2.1 of [4] generalizes to the whole family, including the embedded metric. The matrix exponentials required by our formulas depend on the shorter dimension of a Stiefel matrix, hence are efficient for low-rank matrices. A quick application of these formulas gives us new, trigonometric-like formulas for the exponential map and logarithm map of Grassmann manifolds (considered as quotient manifolds of Stiefel manifolds), similar to those of the sphere.

For the Stiefel manifold, our generalized formulas for geodesics imply the search space for the logarithm map between given points can be identified with a Euclidean space of substantially smaller dimension than the dimension of the manifold when matrices in the manifold are of low rank. We propose an approach to compute the logarithm map by minimizing the squared Frobenius distance between the ending point of a geodesic and a target point. Thus, we treat the computation of the logarithm map as an optimization problem with efficiently computed gradients. In spirit, our approach is close to the shooting method of [8, 12, 9]. The efficient evaluation of the gradient makes use of Fréchet derivatives (introduced with efficient computational procedures in [13, 14, 15]), bypassing the requirement for a partition of the time interval. The overall optimization problem is solved using a trust-region solver with a numerical Hessian (an L-BFGS-B solver is also likely to work).

The approach works well for all metrics in the family when the ending point of the geodesic is sufficiently close to the initial point, making this approach competitive for statistical problems, in particular the Riemannian center of mass [5]. For target points further from the initial point, the method is also robust: most of the time it returns a reasonable candidate for the logarithm map, generating a geodesic that reaches the target point and has good potential for being length-minimizing. The objective function and its gradient for the canonical metric are simpler than those of the other metrics in the family.

The application of Fréchet derivatives to compute the gradient of the distance function can be generalized to other manifolds. To illustrate, we apply it to flag manifolds, introduced in the optimization literature in [16, 17, 18]. From [16], a flag manifold could be considered as the manifold of symmetric matrices with specified eigenvalues of given multiplicities (an isospectral manifold) and could be identified with a quotient space of a Stiefel manifold by a group of block-diagonal orthogonal matrices (the Grassmann manifold is a flag manifold with two eigenvalue blocks). Equipping a flag manifold with the quotient metric of a Stiefel metric in the family above, the flag manifold has known geodesics but, except for the Grassmann case, no known logarithm map. Studying geodesics on a flag manifold is equivalent to studying geodesics on a Stiefel manifold, where the initial tangent direction satisfies a horizontality condition, while the geodesic is required to reach not a target point but an element in its orbit. Our approach, minimizing the Euclidean distance in the symmetric matrix realization, with the help of Fréchet derivatives, also works well in this case. For Grassmann manifolds, our numerical algorithm for the logarithm map matches the closed-form formula. For other flag manifolds, it also returns a reasonable candidate.

We will review basic notations in section 2, then prove the geodesic formulas for Stiefel manifolds in section 3 and for Grassmann manifolds in section 4. We review Fréchet derivatives in section 5 and present our algorithm for the logarithm map on Stiefel manifolds in section 6. We review flag manifolds and report the corresponding results in section 7. We finish with a calculation of the Riemannian center of mass, some notes on implementation, and a final discussion.

2 Notations and background

Most of the results here are valid for both base fields ℝ and ℂ, thus we denote by 𝕂 either field when the results hold for both. We will denote by 𝕂^{n×p} the space of matrices of n rows and p columns in 𝕂. We will take the transpose A^T to be the real transpose if the base field is real, and the hermitian transpose if the base field is complex. If the base field is ℝ, the notation Tr A denotes the usual trace of a matrix A. If the base field is ℂ, Tr A denotes the real part of the usual trace. The (real) inner product on a matrix space is ⟨A, B⟩ = Tr(A^T B), which with the above conventions is the real Euclidean inner product in both the real and the complex case. We denote by ‖·‖ the usual Euclidean norm. We call a matrix A antisymmetric if A^T = −A, symmetric if A^T = A. Define Skew(n) to be the space of all antisymmetric n×n matrices, and skew(A) = (A − A^T)/2 to be the anti-symmetrize operator. We denote by U_𝕂(n) the 𝕂-orthogonal group, the group of matrices U in 𝕂^{n×n} such that U^T U = I_n; thus U_ℝ(n) is O(n), the real orthogonal group, and U_ℂ(n) is the unitary group U(n).
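For readers following along in code, these conventions can be collected into a few helpers; a minimal sketch covering both the real and the complex case (the names asym, inner, and is_orthogonal are ours, not the paper's):

```python
import numpy as np

def asym(a):
    """Anti-symmetrize operator: the antisymmetric part (a - a^T)/2,
    using the conjugate (hermitian) transpose over the complex field."""
    return (a - a.conj().T) / 2

def inner(a, b):
    """Real inner product Re Tr(a^H b); reduces to Tr(a^T b) over the reals."""
    return np.trace(a.conj().T @ b).real

def is_orthogonal(u, tol=1e-12):
    """Membership test for O(n) (real) or U(n) (complex): u^H u = I."""
    return np.linalg.norm(u.conj().T @ u - np.eye(u.shape[0])) < tol
```

With these conventions, inner(a, a) is the squared Euclidean norm of a in both the real and the complex case.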

On a Riemannian manifold M, a geodesic has the distance-minimizing property and satisfies a geodesic equation of the form γ̈ + Γ(γ̇, γ̇) = 0, where Γ could be defined locally (by Christoffel symbols) or globally (by a Christoffel function, see [4]). If x is a point on M and v is a tangent vector at x, and γ is a geodesic on M with γ(0) = x and γ̇(0) = v, we denote γ(1) by Exp_x v. Its inverse Log_x y denotes a vector of minimal length such that Exp_x(Log_x y) = y. In general, Log_x y may not exist, and it may not be unique. However, as Stiefel and its quotient manifolds are compact, they are complete, and there is at least one geodesic connecting any two points (the Hopf–Rinow theorem, [19], Theorem 16.17). We use a superscript to make explicit the manifold under consideration. Thus Exp^St, Exp^Gr, and Exp^Fl denote the exponential maps on the Stiefel, Grassmann, and flag manifolds (see below). Similarly, we use the notations Log^St, Log^Gr, and Log^Fl for the logarithm maps.

We will denote by St(n, p) the Stiefel manifold over the field 𝕂, which consists of matrices Y in 𝕂^{n×p} satisfying the relation Y^T Y = I_p. For p = n, St(n, n) is the group U_𝕂(n), but we will focus on its identity component. The corresponding Grassmann manifold is denoted by Gr(n, p). Flag manifolds will be reviewed in section 7. We use the notation [Y] to denote the equivalence class of Y (usually an element of the Stiefel manifold) in the quotient structure on the Grassmann or flag manifolds.

Fréchet derivatives, the directional derivatives of a matrix function f at a point A in a direction E, will be denoted by Df(A)[E]. From the works [14, 15] and especially [13], they could be computed very effectively. From our perspective, a calculation involving Fréchet derivatives is as effective computationally as a closed-form solution; thus, we can consider the gradient of the objective function discussed here computable in closed form.

We will use the entire function corresponding to cos √x and the entire function corresponding to (sin √x)/√x. These entire functions exist, as seen from the Taylor series expansions of cos and sin, which contain only integer powers of x after the substitution. They will be used to give analytic formulas for geodesics on Grassmann manifolds. We will denote by f(A) the matrix obtained by substituting a square matrix A into the Taylor series of such an entire function f.

3 Geodesics for Stiefel manifolds

As noted, we will be working with matrix manifolds over a base field ℝ or ℂ. Let St(n, p) be the Stiefel manifold of matrices Y satisfying Y^T Y = I_p, and let α₀, α₁ be two positive real numbers. We define the operator-valued function

This gives us an inner product for matrices in the ambient space 𝕂^{n×p}, parametrized by α₀ and α₁. A tangent vector at Y could be considered as a matrix η such that Y^T η is antisymmetric. The inner product induces a Riemannian metric on St(n, p). The case α₀ = α₁ corresponds to the embedded metric, and the case α₁ = α₀/2 corresponds to the canonical metric, both defined in [4] for 𝕂 = ℝ. Matrices with columns in the span of Y and matrices with columns orthogonal to Y are eigenvectors of the operator, with eigenvalues α₁ and α₀, respectively. Thus, the operator is positive definite. The paper [4] provided closed-form geodesics for the embedded and canonical metrics in separate formats. We generalize both results below:

Theorem 1

Let be a tangent vector to at a point . Let . Let be an orthogonal basis of the column span of . Expressing in this basis. Let and , the geodesic equation for the metric in eq. 3.1 is


Each equation below describes the geodesic with :



The metric defined in equations (43) of [10] corresponds to , where defined in that paper corresponds to in our notation. Since geodesics are unchanged by metric scaling, eq. 3.2 follows from the geodesic equation (64) of [10]. (The proof in [10] extends to the complex case.) We provided an alternative derivation in [11].

Section 2.2.2 of [4] shows a derivation of eq. 3.3 for the canonical case (based on ideas of Ross Lippert). We show it extends with little change to the whole family. From , we have , and from here . Following [4], put (we should use a different symbol to be clear, but we will show it is constant shortly), and , hence . Then, expands to (dropping )

where we have used the geodesic equation to expand , and use to reduce to zero in the last equality. Thus, is constant, and by the initial condition is -antisymmetric, . Next, expand . We simplify using eq. 3.2:

Thus , where we have used the fact that . Therefore . Equation 3.2 becomes:


The last equation implies:

From here, eq. 3.3 follows. Theorem 2.1 of [4] is a special case of eq. 3.4 for . To prove it, set , then take time derivatives:

With , the left-hand-side of eq. 3.5 becomes:

We collect the coefficients for as and for as , hence the equation for is

From eq. 3.3, for some function , so

By linearity, both and satisfy the same equation as . The initial conditions are , so eq. 3.4 gives us the solution. If is of full rank, the thin decomposition satisfies the condition of the theorem as . In the degenerate case, we use an SVD decomposition to determine the rank, and has fewer columns than does.    

In our notations, the geodesic formula in equation (75) of [10] could be written as


Without the help of the geodesic equation, it is not apparent that this formula is equivalent to eq. 3.3 and eq. 3.4. As noted, it requires computing exponentials of two matrices.

4 Grassmann manifolds and geodesics

On the unit sphere, the exponential map at a point x is given by the simple formula Exp_x v = cos(‖v‖) x + (sin ‖v‖/‖v‖) v (which reflects the fact that the angle between x and Exp_x v is ‖v‖). This formula could be written with cos √z and (sin √z)/√z applied to z = ‖v‖², understood as entire functions as mentioned in section 2. This formula implies x·Exp_x v = cos ‖v‖; from here, the logarithm map for the sphere is given by Log_x y = (θ/sin θ)(y − (cos θ) x) with θ = arccos(x·y).
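These two sphere formulas translate directly into code; a minimal sketch (the clamp guarding arccos is a numerical safeguard we add):

```python
import numpy as np

def sphere_exp(x, v):
    """Exponential map on the unit sphere: cos(|v|) x + (sin|v|/|v|) v."""
    t = np.linalg.norm(v)
    return np.cos(t) * x + np.sinc(t / np.pi) * v  # np.sinc(z) = sin(pi z)/(pi z)

def sphere_log(x, y):
    """Logarithm map: the tangent vector at x of minimal length reaching y."""
    theta = np.arccos(np.clip(x @ y, -1.0, 1.0))   # angle between x and y
    if theta < 1e-14:
        return np.zeros_like(x)
    return (theta / np.sin(theta)) * (y - np.cos(theta) * x)
```

A round trip sphere_log(x, sphere_exp(x, v)) recovers v for any tangent v with ‖v‖ < π.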

As a corollary of theorem 1, we provide similar formulas for horizontal geodesics and logarithm on the Grassmann manifold , considered as a quotient of . For other approaches to Grassmann geodesics, see [20, 21].

Recall from [4] that the Grassmann manifold could be considered as a quotient of the Stiefel manifold under the action of the orthogonal group by right multiplication. We represent the equivalence class of by , and the image of a tangent vector under this quotient as . A tangent vector on the Stiefel manifold is horizontal with respect to this action if . Since and are entire functions, and are defined for any square matrix . Also, if we define to be at , we can add to its domain to get a function on the interval , which is right-continuous at . With this convention, we can define for a diagonal matrix with diagonal entries in .

Proposition 1

Let and be the analytic continuations of and . At a point on the Stiefel manifold , for a tangent vector such that , the Stiefel exponential map is given by


The class represents the Grassmann exponential . Conversely, for and in , let be an SVD decomposition of . Set


then is a horizontal vector on the Stiefel manifold and . Thus, and represent the same element in the Grassmann manifold . The length of is , which is the geodesic distance between the equivalent classes and of and , hence is the lift of a geodesic logarithm of to .


For a horizontal vector, , and expand in eq. 3.3

where , which gives us eq. 4.1. Now, assume is horizontal such that with ( connects with an element in the orbit of ). Multiplying both sides of eq. 4.1 by and using the horizontal condition, we have , thus is a polar decomposition. This implies if is an SVD decomposition. From here, eigenvalues of must be diagonal entries of or , plus multiples of , with the smallest possible choice. Hence, is a lower bound on geodesic lengths, which we will show is attainable. The analysis also shows that if is of full rank, the polar decomposition is unique, thus both the alignment matrix and are unique. In that case, if the minimal length is attainable, all entries of are in , so is invertible and we can solve for at most one from eq. 4.1; thus, length-minimizing geodesics between and , if they exist, must be unique. If an eigenvalue of is , we may have more than one length-minimizing geodesic.

Let us prove that eq. 4.2 gives us a length-minimizing geodesic. Direct substitution shows . Since is diagonal, elementwise functions of it commute; together with , eq. 4.2 implies

Using , hence , the second term is

Thus , so and . Substitute these expressions to eq. 4.1 to evaluate:

The last expression is reduced to . Thus , representing the same Grassmann element as . The length of is .    

We note that for the case , is the real sphere and the Grassmann manifold is the projective space . The geodesic above connects to . In this case, eq. 4.2 is , versus the logarithm map of the sphere . Numerically, to evaluate eq. 4.1 we can evaluate the entire functions on the eigenvalues of the symmetric matrix . To evaluate eq. 4.2, we only need one SVD decomposition of . We benefited from reading [21].
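For a concrete real-valued illustration, the standard SVD-based expressions for the Grassmann exponential and logarithm can be sketched as below. These agree with the proposition's formulas up to the choice of representatives; the arctangent form of the logarithm is a common equivalent parameterization, not the paper's exact notation:

```python
import numpy as np

def grassmann_exp(y, h, t=1.0):
    """Geodesic from span(y) with horizontal tangent h: with thin SVD
    h = u s v^T, gamma(t) = y v cos(t s) v^T + u sin(t s) v^T."""
    u, s, vt = np.linalg.svd(h, full_matrices=False)
    return y @ vt.T @ np.diag(np.cos(t * s)) @ vt + u @ np.diag(np.sin(t * s)) @ vt

def grassmann_log(y, y1):
    """Horizontal lift of the logarithm: arctangent of the singular values of
    (I - y y^T) y1 (y^T y1)^{-1}; principal angles below pi/2 assumed."""
    m = (y1 - y @ (y.T @ y1)) @ np.linalg.inv(y.T @ y1)
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    return u @ np.diag(np.arctan(s)) @ vt
```

Since Stiefel representatives of the same subspace differ by a right orthogonal factor, a round trip is checked on the projectors Y Y^T rather than on the matrices themselves.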

5 Fréchet derivatives

Recall from [13, 14, 15] that if f(x) = Σ_{i≥0} a_i x^i is a power series with scalar coefficients and A is a square matrix, then the Fréchet derivative in a direction E could be expressed as

Df(A)[E] = Σ_{i≥1} a_i Σ_{j=0}^{i−1} A^j E A^{i−1−j}

under standard convergence conditions; thus, for entire functions it always exists. It is known that f(A) and Df(A)[E] could be computed together with a computational complexity of around three times the complexity of computing f(A) alone. There exist routines to compute Fréchet derivatives of the exponential function in open-source and commercial packages, for example, the function expm_frechet in SciPy [22]. To explain why we prefer Fréchet derivatives, consider the simple case of the directional derivative of the exponential function. The well-known formula for this derivative (Section 3.2 of [19]) is difficult to evaluate directly. On the other hand, the library function expm_frechet is effective computationally. The computational cost is cubic in the size of the matrix (it is also dependent on the logarithm of the eigenvalues of the matrix). In the cases of symmetric and antisymmetric matrices, an eigenvalue decomposition or a Schur decomposition may improve the sparsity of the operations.
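A quick sanity check of the routine mentioned above against a central finite difference; a sketch assuming a SciPy version providing scipy.linalg.expm_frechet:

```python
import numpy as np
from scipy.linalg import expm, expm_frechet

rng = np.random.default_rng(0)
a = rng.standard_normal((5, 5))
e = rng.standard_normal((5, 5))

# expm_frechet returns the pair (expm(a), L(a, e)), where L is the
# Frechet derivative of the matrix exponential at a in direction e.
fa, l_ae = expm_frechet(a, e)

# Central finite-difference approximation of the same directional derivative.
h = 1e-6
fd = (expm(a + h * e) - expm(a - h * e)) / (2 * h)
err = np.linalg.norm(l_ae - fd)  # small, at the level of finite-difference error
```

The agreement is limited only by the finite-difference step, while expm_frechet itself is accurate to working precision.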

The following lemma (which is likely known, but we could not find a reference) shows Fréchet derivatives are useful for computing the matrix gradient of a power series.

Lemma 1

Let be an analytic function near and is well-formed for some matrices . Then



From the analytic assumption we only need to prove this for . This follows from .    
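As one instance of the kind of identity the lemma provides, the Euclidean gradient of A ↦ Tr(B exp(A)) is the Fréchet derivative with transposed arguments, L(A^T, B^T); a numerical sketch (the names are ours):

```python
import numpy as np
from scipy.linalg import expm, expm_frechet

rng = np.random.default_rng(2)
a = rng.standard_normal((4, 4))
b = rng.standard_normal((4, 4))

def f(x):
    """Scalar objective Tr(B expm(X))."""
    return np.trace(b @ expm(x))

# Gradient via the trace identity Tr(B L(A, E)) = Tr(L(A, B) E),
# which gives grad f(A) = L(A, B)^T = L(A^T, B^T).
grad = expm_frechet(a.T, b.T)[1]

# Check one directional derivative against a central finite difference.
e = rng.standard_normal((4, 4))
h = 1e-6
fd = (f(a + h * e) - f(a - h * e)) / (2 * h)
```

The pairing of the gradient with the direction, sum(grad * e), matches the finite-difference value.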

6 Computing the logarithm map for the Stiefel manifold

6.1 Main algorithm

We now turn to the problem of computing the logarithm map on the Stiefel manifold. From this section on we will work with the base field . We expect the complex case could be tackled similarly. We benefited from reading [7, 6, 8] while preparing this section.

For two elements , our approach is to solve eq. 3.4 for the tangent vector , expressed in terms of the matrices and . Let . Equation (3.4) could be written in the following form


We see is in the column span of and . Here, is the number of columns of . On the other hand, for any matrices of compatible dimensions such that with and for an integer , is a tangent vector and eq. 6.1 describes a geodesic with . We note the length of is .

We now show that, given and , if is such that is an orthogonal basis of the column span of and , then there is a geodesic connecting and such that is in the span of . If is in the intersection of the span of and , then for . Next, implies ; thus, this intersection is itself a Stiefel manifold. The tangent vectors of this small Stiefel manifold are of the form , where is antisymmetric, and the induced metric is . Since the intersection Stiefel manifold is complete, there is a geodesic in it, in the induced metric, connecting and . From the expression of the induced metric, the geodesic in the small manifold is a geodesic in the full Stiefel manifold.

As the exponential map is injective near , for close enough to , the above analysis shows we can assume is a complement basis of in the column span of and . Therefore, we will fix an orthogonal basis of the column span of . We solve for and by minimizing the function with given by eq. 3.4, where is the Frobenius norm. Since the manifold is complete, there is always a geodesic connecting and ; thus, a minimum always exists with (global) minimal value . Our algorithm will be local, so it may return a value greater than , and even if the minimum value is , the geodesic found may not be of minimal length. However, if is close enough to (within its injectivity radius), the Riemannian logarithm is unique, and the minimizer from this approach gives us the logarithm map and geodesic distance. As we will see in section 6.2, this objective function works quite adequately for our purpose. The main idea is that the gradient of could be computed by Fréchet derivatives.
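To make the procedure concrete for the canonical metric (the simplest member of the family), the sketch below shoots for the tangent vector using the geodesic formula of Theorem 2.1 of [4], with the gradient assembled from expm_frechet via the adjoint identity of section 5. It uses L-BFGS-B, the variant mentioned above; all names are ours, so this is an illustration of the framework rather than the paper's implementation:

```python
import numpy as np
from scipy.linalg import expm, expm_frechet
from scipy.optimize import minimize

def geodesic_end(y, q, a, r):
    """Canonical-metric geodesic endpoint (Theorem 2.1 of [4]):
    Y(1) = [Y Q] expm([[A, -R^T], [R, 0]]) [[I], [0]]."""
    p = y.shape[1]
    m = np.block([[a, -r.T], [r, np.zeros((p, p))]])
    return np.hstack([y, q]) @ expm(m)[:, :p]

def stiefel_log_shooting(y, y1):
    """Shoot for H = Y A + Q R minimizing f(A, R) = |Y(1) - Y1|_F^2."""
    p = y.shape[1]
    q, _ = np.linalg.qr(y1 - y @ (y.T @ y1))   # basis of the complement span
    iu = np.triu_indices(p, 1)                 # free entries of the skew part A
    na = iu[0].size

    def unpack(x):
        a = np.zeros((p, p))
        a[iu] = x[:na]
        return a - a.T, x[na:].reshape(p, p)

    def fg(x):
        a, r = unpack(x)
        m = np.block([[a, -r.T], [r, np.zeros((p, p))]])
        w = np.hstack([y, q])
        delta = w @ expm(m)[:, :p] - y1
        # Adjoint of the Frechet derivative: the gradient with respect to
        # the block matrix M is 2 L(M^T, W^T Delta E^T), with E = [[I], [0]].
        g = np.zeros((2 * p, 2 * p))
        g[:, :p] = w.T @ delta
        gm = 2 * expm_frechet(m.T, g)[1]
        ga = gm[:p, :p]
        ga = ga - ga.T                         # chain rule through the skew A
        gr = gm[p:, :p] - gm[:p, p:].T         # chain rule through the -R^T block
        return np.sum(delta ** 2), np.concatenate([ga[iu], gr.ravel()])

    x0 = np.zeros(na + p * p)
    res = minimize(fg, x0, jac=True, method='L-BFGS-B',
                   options={'gtol': 1e-12, 'ftol': 1e-15})
    a, r = unpack(res.x)
    return y @ a + q @ r                       # the tangent vector H
```

For a target within the injectivity radius, the recovered tangent regenerates the endpoint; for farther targets the local caveats discussed above apply.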

Theorem 2

Let and be two points on the Stiefel manifold and is an orthogonal basis of the column span of ( is its rank). For , let be the geodesic starting at with as in theorem 1. Define the function . With , , , we have


Set , where the block structure of is defined to mirror that of ; then the gradient of is given by