## 1. Introduction

Bayesian inference is hard. Bayesian inference on non-Euclidean manifolds is harder. Prior to the publication of byrne2013geodesic, a statistician required great ingenuity to simulate from the posterior distribution of any model with a non-Euclidean parameter space, and the algorithmic details might change significantly depending on the prior, the likelihood, and the constraints implied by the non-Euclidean geometry. A good example of this approach is found in hoff2009simulation, where the posterior distribution over the Stiefel manifold of orthonormal matrices is simulated by way of column-at-a-time Gibbs updates that rely on the specifics of the model.

It is preferable, rather, that the same algorithm work for many different kinds of models. This is one of the strengths of Hamiltonian Monte Carlo duane1987hybrid and its Riemannian extension, RMHMC girolami2011riemann, which augments the posterior distribution with a random Gaussian momentum $p \sim \mathrm{N}\big(0, G(\theta)\big)$, where $G(\theta)$ is the metric tensor pertaining to the Riemannian manifold over which the model is defined. RMHMC simulates from the posterior distribution by simulating the augmented canonical distribution with Hamiltonian

$$H(\theta, p) = U(\theta) + \tfrac{1}{2}\log\det G(\theta) + \tfrac{1}{2}\, p^{\mathsf T} G(\theta)^{-1} p\,, \tag{1.1}$$

i.e., $U(\theta)$ is the negative log-posterior and

$$\tfrac{1}{2}\log\det G(\theta) + \tfrac{1}{2}\, p^{\mathsf T} G(\theta)^{-1} p$$

is the negative logarithm of the probability density function of the Gaussian momentum $p$. Since the kinetic energy is not separable in $\theta$ and $p$, the system cannot be integrated explicitly with the standard leapfrog scheme, so, in most cases, implicit integration methods are required girolami2011riemann. However, byrne2013geodesic point out that, for certain manifolds with known geodesics, it is beneficial to split the Hamiltonian into two parts,

$$H^{[1]}(\theta, p) = U(\theta) + \tfrac{1}{2}\log\det G(\theta) \qquad \text{and} \qquad H^{[2]}(\theta, p) = \tfrac{1}{2}\, p^{\mathsf T} G(\theta)^{-1} p\,,$$

and simulate the two systems iteratively. Here, the first Hamiltonian renders the equations

$$\dot\theta = 0\,, \qquad \dot p = -\nabla_\theta \Big( U(\theta) + \tfrac{1}{2}\log\det G(\theta) \Big)\,,$$

and, crucially, the second Hamiltonian renders the geodesic dynamics for the Riemannian metric's Levi-Civita connection. Thus, the entire system may be simulated by iterating between perturbing the momentum and advancing along the manifold geodesics.

## 2. gMC on embedded manifolds

byrne2013geodesic extend the RMHMC formalism to posterior inference on manifolds embedded in Euclidean space. In the following, this extension is referred to as the embedding geodesic Monte Carlo (egMC). To maintain the RMHMC formalism, the authors begin by considering the inference problem on the *intrinsic* manifold, where the Hausdorff measure $\mathcal H$, and not the Lebesgue measure $\lambda$, is the base measure with respect to which the posterior distribution is defined.¹

¹Whereas the ensuing derivation is extremely clever, it is unfortunate that it relies on an intrinsic conception of the inference problem, which, we will argue, causes confusion when the object of interest is *a priori* defined using the Euclidean embedding coordinates.

Here, the RMHMC Hamiltonian (1.1) may be written

$$H(\theta, p) = U_{\mathcal H}(\theta) + \tfrac12\log\det G(\theta) + \tfrac12\, p^{\mathsf T} G(\theta)^{-1} p$$

for $U_{\mathcal H}(\theta) = -\log \pi_{\mathcal H}(\theta)$ the negative log-posterior with respect to the Hausdorff base measure. Now, a clever change of variables occurs using an *isometric embedding* as a tool. An isometric embedding of a manifold into Euclidean space is a map $\xi$ satisfying

$$J_\xi(\theta)^{\mathsf T} J_\xi(\theta) = G(\theta)$$

for $J_\xi(\theta)$ the Jacobian of the map evaluated at $\theta$. byrne2013geodesic use the isometric embedding to make gMC practical on certain manifolds. This is accomplished by the change of variables $x = \xi(\theta)$, with the momentum transformed as

$$v = J_\xi(\theta)\, G(\theta)^{-1} p\,.$$

If $v$ is defined in this way, then the Hamiltonian becomes ([byrne2013geodesic], Equation (9))

$$H(x, v) = U_{\mathcal H}(x) + \tfrac12\, v^{\mathsf T} P(x)\, v \tag{2.1}$$

for $P(x) = J_\xi(\theta)\, G(\theta)^{-1} J_\xi(\theta)^{\mathsf T}$ the projection matrix of the tangent space of the embedded manifold (at point $x$) conceived of as a subspace of the ambient Euclidean space; for tangent $v$, $P(x)v = v$, so the kinetic term is simply $\tfrac12 v^{\mathsf T}v$. The authors point out that “the target density is still defined with respect to the Hausdorff measure of the manifold, and so no additional log-Jacobian term is introduced,” and invite the reader to

[n]ote that by working entirely in the embedded space, we completely avoid the coordinate system and the related problems where no single global coordinate system exists. The Riemannian metric only appears in the Jacobian determinant term of the density: in certain examples, this can also be removed, for example by specifying the prior distribution as uniform with respect to the Hausdorff measure…
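The role of the isometric embedding is easy to check numerically. In the sketch below (our own illustration; the spherical-coordinate chart is an arbitrary choice), the projection $J_\xi\, G^{-1} J_\xi^{\mathsf T}$ computed from an explicit chart of the sphere coincides with the coordinate-free projection $I - xx^{\mathsf T}$:

```python
import numpy as np

# Spherical-coordinate embedding xi(theta, phi) of S^2 into R^3.
def xi(theta, phi):
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def jacobian(theta, phi, h=1e-6):
    # Numerical Jacobian of the embedding; columns are partials w.r.t. (theta, phi).
    J = np.empty((3, 2))
    J[:, 0] = (xi(theta + h, phi) - xi(theta - h, phi)) / (2 * h)
    J[:, 1] = (xi(theta, phi + h) - xi(theta, phi - h)) / (2 * h)
    return J

theta, phi = 0.7, 1.3
x = xi(theta, phi)
J = jacobian(theta, phi)
G = J.T @ J                      # induced (round) metric: diag(1, sin^2 theta)
P = J @ np.linalg.inv(G) @ J.T   # projection onto the tangent space at x
```

The chart is only valid away from the poles, which is exactly the coordinate-system problem the embedded formulation avoids.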

But it is not immediately clear how one should approach the common scenario where the prior is defined *a priori* using the embedding coordinates, i.e., those of the ambient Euclidean space. On the sphere, for example, such priors include the von Mises-Fisher distribution. On the Stiefel manifold, such priors include the matrix Bingham-von Mises-Fisher distribution hoff2009simulation. Contrary to the above statement, one suspects that the log-Jacobian term should never be necessary, and this turns out to be the case.

## 3. Alternative derivation I

Let $\pi(x)$ denote a target posterior density defined directly using embedding coordinates. For the unit sphere, this means that $\pi$ is a function of $x \in \mathbb{R}^{d}$ satisfying $x^{\mathsf T}x = 1$; for the Stiefel manifold of orthonormal matrices, this means that $\pi$ is a function of $X \in \mathbb{R}^{d\times p}$ satisfying $X^{\mathsf T}X = I_p$, for $I_p$ the identity matrix of the given dimension. Let $P_x$ be the orthogonal projection onto the tangent space of the embedded manifold at point $x$. For example, for the sphere, this projection is given by

$$P_x = I_d - x\,x^{\mathsf T}\,;$$

for the Stiefel manifold, the matrix is (see Appendix B)

$$P_X = I_{dp} - \tfrac12\Big[\big(I_p \otimes X X^{\mathsf T}\big) + \big(X^{\mathsf T} \otimes X\big)K\Big]$$

for $\otimes$ the Kronecker product and $K$ the permutation matrix for which $K\,\mathrm{vec}(A) = \mathrm{vec}(A^{\mathsf T})$ for any $d \times p$ matrix $A$. For simplicity, we take the sphere as our prime example and leave the Stiefel manifold case for the appendix.
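The Stiefel projection matrix can be sanity-checked numerically. Below is our own check (not the paper's code): `vec` follows the column-stacking convention, the commutation matrix `K` is built entry by entry, and the resulting `P` is compared against the direct tangent-space projection used in Appendix B.

```python
import numpy as np
rng = np.random.default_rng(0)

d, p = 5, 2
X, _ = np.linalg.qr(rng.standard_normal((d, p)))   # a point on the Stiefel manifold

def vec(A):
    return A.reshape(-1, order="F")                # column-stacking vec operator

# Commutation (permutation) matrix K with K vec(A) = vec(A.T) for d x p matrices A.
K = np.zeros((d * p, d * p))
for i in range(d):
    for j in range(p):
        K[i * p + j, j * d + i] = 1.0

# Tangent-space projection matrix for the Stiefel manifold.
P = np.eye(d * p) - 0.5 * (np.kron(np.eye(p), X @ X.T) + np.kron(X.T, X) @ K)
```

Applying `P` to `vec(V)` agrees with the direct projection $V - \tfrac12 X\big(X^{\mathsf T}V + V^{\mathsf T}X\big)$, and `P` is symmetric and idempotent, as an orthogonal projection must be.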

Let momentum $p$ follow a degenerate Gaussian distribution on the tangent space to the sphere at $x$, i.e., $p \sim \mathrm{N}\big(0, P_x\Sigma P_x\big)$, where $\Sigma$ is some positive semi-definite matrix. Then at any point $x$, the density of $p$ is proportional to

$${\det}^{+}\big(P_x\Sigma P_x\big)^{-1/2}\exp\Big(-\tfrac12\, p^{\mathsf T}\big(P_x\Sigma P_x\big)^{+}p\Big)\,,$$

where $\det^{+}$ is the pseudo-determinant and $A^{+}$ is the pseudo-inverse of matrix $A$. Then the Hamiltonian is given by

$$H(x, p) = U(x) + \tfrac12\log{\det}^{+}\big(P_x\Sigma P_x\big) + \tfrac12\, p^{\mathsf T}\big(P_x\Sigma P_x\big)^{+}p \tag{3.1}$$

for any pair $x$ and $p$, where $U(x) = -\log\pi(x)$. Similar to the original gMC algorithm, we split $H$ into two Hamiltonians

$$H^{[1]} = U(x) + \tfrac12\log{\det}^{+}\big(P_x\Sigma P_x\big)$$

and

$$H^{[2]} = \tfrac12\, p^{\mathsf T}\big(P_x\Sigma P_x\big)^{+}p\,.$$

Using some matrix calculus (Appendix C) and the fact that $\mathrm d\,\log{\det}^{+}(A) = \mathrm{tr}\big(A^{+}\,\mathrm dA\big)$ holbrook2018differentiating, the first system gives the equations

$$\dot x = 0\,, \qquad \dot p = -\nabla_x U(x) - \tfrac12\,\nabla_x \log{\det}^{+}\big(P_x\Sigma P_x\big)\,. \tag{3.2}$$

Since the gradient does not necessarily belong to the tangent space, we perform the change of variables $v = \big(P_x\Sigma P_x\big)^{+}p$. The equations now read

$$\dot x = 0\,, \qquad \dot v = -\big(P_x\Sigma P_x\big)^{+}\Big(\nabla_x U(x) + \tfrac12\,\nabla_x \log{\det}^{+}\big(P_x\Sigma P_x\big)\Big)\,. \tag{3.3}$$

The velocity stays on the tangent space at $x$ because the range of $\big(P_x\Sigma P_x\big)^{+}$ lies in the tangent space; the momentum need not, because in general $P_x\nabla_x U(x) \neq \nabla_x U(x)$. The second system may also be rewritten

$$H^{[2]} = \tfrac12\, v^{\mathsf T}\big(P_x\Sigma P_x\big)\, v\,,$$

where $p = \big(P_x\Sigma P_x\big)v$. The system corresponding to $H^{[2]}$ is solved by the geodesic with initial conditions $\big(x(0), \dot x(0)\big) = (x, v)$. Thus the system corresponding to $H$ may be integrated by iteratively advancing according to (3.3) and spherical geodesics, alternating between $v = \big(P_x\Sigma P_x\big)^{+}p$ and $p = \big(P_x\Sigma P_x\big)v$ between steps. The general algorithm is given in Appendix A.1.
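A single pass of the alternation can be sketched as follows (our own reading of the scheme, in which the momentum is mapped to a tangent velocity via the pseudo-inverse, advanced along a great circle, and mapped back at the new point; the mass matrix is an illustrative choice):

```python
import numpy as np
rng = np.random.default_rng(2)

d = 3
x = np.array([0.0, 0.0, 1.0])
P = np.eye(d) - np.outer(x, x)
Sigma = np.diag([1.0, 2.0, 3.0])               # illustrative mass matrix
S = P @ Sigma @ P
p = P @ rng.standard_normal(d)                 # a tangent momentum at x

# Map momentum to velocity, advance along a great circle, map back.
v = np.linalg.pinv(S) @ p
speed = np.linalg.norm(v)
t = 0.1
u = v / speed
x_new = x * np.cos(speed * t) + u * np.sin(speed * t)
v_new = speed * (-x * np.sin(speed * t) + u * np.cos(speed * t))
P_new = np.eye(d) - np.outer(x_new, x_new)
S_new = P_new @ Sigma @ P_new
p_new = S_new @ v_new                          # momentum at the new point
```

The new position remains on the sphere and the new momentum remains tangent, so the alternation is well defined.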

Accounting for the deterministic maps $p \mapsto v$ and $v \mapsto p$ within the accept/reject step yields a surprisingly simple acceptance probability. For the trajectory beginning at point $x_0$, $p_0$ is mapped to $v_0 = S_{x_0}^{+}p_0$ before the geodesic flow, but $v$ is mapped to $p = S_{x_1}v$ afterward, where we have used the shorthand $S_x = P_x\Sigma P_x$. But before the next geodesic flow, we apply the inverse map $v = S_{x_1}^{+}p$. In this way, all internal deterministic maps cancel out, and one must only account for the first and last. Thus, for a trajectory consisting of $T$ steps, the Jacobian correction is

$$\frac{{\det}^{+}\big(S_{x_T}\big)}{{\det}^{+}\big(S_{x_0}\big)}\,,$$

and the resulting log acceptance probability is the minimum of 0 and

$$U(x_0) - U(x_T) + \tfrac12\log{\det}^{+}S_{x_T} - \tfrac12\log{\det}^{+}S_{x_0} + \tfrac12\, p_0^{\mathsf T}S_{x_0}^{+}p_0 - \tfrac12\, p_T^{\mathsf T}S_{x_T}^{+}p_T\,. \tag{3.4}$$

See Appendix A.1 for details.
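The ingredients of this section are easy to realize numerically. The sketch below (our own, with an arbitrary illustrative $\Sigma$) draws a tangent momentum from the degenerate Gaussian and evaluates the pseudo-determinant and pseudo-inverse terms appearing in the Hamiltonian (3.1):

```python
import numpy as np
rng = np.random.default_rng(1)

d = 4
x = rng.standard_normal(d)
x /= np.linalg.norm(x)                      # a point on the sphere
P = np.eye(d) - np.outer(x, x)              # tangent-space projection at x
Sigma = np.diag(np.arange(1.0, d + 1.0))    # an arbitrary PSD matrix (illustrative)
S = P @ Sigma @ P                           # covariance of the degenerate Gaussian

# Draw p ~ N(0, S) by projecting an ambient N(0, Sigma) draw onto the tangent space.
p = P @ (np.linalg.cholesky(Sigma) @ rng.standard_normal(d))

# Pseudo-determinant and pseudo-inverse, as used in the Hamiltonian (3.1).
eigs = np.linalg.eigvalsh(S)
log_pdet = np.sum(np.log(eigs[eigs > 1e-10]))
S_pinv = np.linalg.pinv(S)
kinetic = 0.5 * log_pdet + 0.5 * p @ S_pinv @ p
```

Note that $S$ has rank $d-1$: one eigenvalue is (numerically) zero, and the pseudo-determinant is the product of the remaining eigenvalues.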

## 4. Alternative derivation II

But why begin with the momentum at all? By beginning with the velocity, one may derive yet another class of algorithms that nonetheless reduces to the original geodesic Monte Carlo algorithm. The following approach is similar to that of holbrook2017geodesic and is related to the Lagrangian formulation found in lan2015markov. We let the velocity have the same distribution as the momentum had before, i.e., $v \sim \mathrm{N}\big(0, P_x\Sigma P_x\big)$, and define the non-canonical (cf. beskos2011hybrid) Hamiltonian

$$H^{*}(x, v) = U(x) - \tfrac12\log{\det}^{+}\big(P_x\Sigma P_x\big) + \tfrac12\, v^{\mathsf T}\big(P_x\Sigma P_x\big)^{+}v\,.$$

Note that the sign of the log pseudo-determinant differs from that of Equation (3.1), but the quadratic terms are equal. Again, split the Hamiltonian in two:

$$H^{*[1]} = U(x) - \tfrac12\log{\det}^{+}\big(P_x\Sigma P_x\big) \qquad \text{and} \qquad H^{*[2]} = \tfrac12\, v^{\mathsf T}\big(P_x\Sigma P_x\big)^{+}v\,.$$

The first yields the equations

$$\dot x = 0\,, \qquad \dot v = -\big(P_x\Sigma P_x\big)^{+}\Big(\nabla_x U(x) - \tfrac12\,\nabla_x \log{\det}^{+}\big(P_x\Sigma P_x\big)\Big)\,,$$

where the only difference with Equation (3.3) is the sign of the log pseudo-determinant term. The second Hamiltonian is handled in the exact same way as above. Map $v$ to a geodesic velocity, advance along the geodesics, and map back to $v$. As above, the same Jacobian correction appears in the accept/reject step, and this time the log acceptance probability simplifies even further (see Appendix A.2) to

$$U(x_0) - U(x_T) + \tfrac12\, v_0^{\mathsf T}\big(P_{x_0}\Sigma P_{x_0}\big)^{+}v_0 - \tfrac12\, v_T^{\mathsf T}\big(P_{x_T}\Sigma P_{x_T}\big)^{+}v_T\,, \tag{4.1}$$

i.e., the log pseudo-determinants cancel. See Appendix A.2 for algorithmic details.

## 5. Obtaining the original algorithm

For both alternative derivations, the formulas greatly simplify when $\Sigma$ is the identity matrix, and the original geodesic Monte Carlo algorithm is obtained. Because the pseudo-determinant of a projection matrix is unity, the Hamiltonians reduce to

$$H(x, p) = H^{*}(x, p) = U(x) + \tfrac12\, p^{\mathsf T}P_x\, p\,.$$

The simplified Hamiltonian is the same as Formula (2.1), but with $U$ replacing $U_{\mathcal H}$, the negative log-posterior with respect to the Hausdorff measure. As established above, the two are equivalent, but by working completely with embedding coordinates, we are able to avoid any notion of intrinsic geometry whatsoever and thus require less mathematical machinery.

Similarly, there is no need for the Jacobian correction within the accept/reject step. Concretely, this is because ${\det}^{+}(P_x) = 1$ for every $x$. Theoretically, this is because the geodesic Monte Carlo algorithm is not symplectic for general $\Sigma$ but is symplectic for the identity. Finally, the two derivations may be viewed as constructing random walks on the cotangent and tangent bundles, respectively. The upshot is that the original geodesic Monte Carlo algorithm may be interpreted either way.
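Putting the pieces together, the following self-contained sketch (our own code, not byrne2013geodesic's; the von Mises-Fisher concentration vector `c` is an illustrative choice) implements the identity-mass special case on the unit sphere, where the acceptance step needs no Jacobian or pseudo-determinant terms:

```python
import numpy as np

def gmc_sphere(neg_log_post, grad, x0, eps=0.1, n_leapfrog=10, n_samples=500, seed=3):
    """Geodesic Monte Carlo on the unit sphere with identity mass matrix."""
    rng = np.random.default_rng(seed)
    d = len(x0)
    x = x0 / np.linalg.norm(x0)
    samples, accepts = [], 0
    for _ in range(n_samples):
        P = np.eye(d) - np.outer(x, x)
        v = P @ rng.standard_normal(d)               # v ~ N(0, P): tangent momentum
        x_new, v_new = x, v
        h0 = neg_log_post(x) + 0.5 * v @ v
        for _ in range(n_leapfrog):
            Pn = np.eye(d) - np.outer(x_new, x_new)
            v_new = v_new - 0.5 * eps * (Pn @ grad(x_new))     # half kick
            a = np.linalg.norm(v_new)
            if a > 0:                                          # great-circle flow
                u = v_new / a
                x_new, v_new = (x_new * np.cos(a * eps) + u * np.sin(a * eps),
                                a * (-x_new * np.sin(a * eps) + u * np.cos(a * eps)))
            Pn = np.eye(d) - np.outer(x_new, x_new)
            v_new = v_new - 0.5 * eps * (Pn @ grad(x_new))     # half kick
        h1 = neg_log_post(x_new) + 0.5 * v_new @ v_new
        if np.log(rng.uniform()) < h0 - h1:                    # no Jacobian terms
            x, accepts = x_new, accepts + 1
        samples.append(x)
    return np.array(samples), accepts / n_samples

# von Mises-Fisher-type target with concentration vector c (illustrative choice):
c = np.array([0.0, 0.0, 5.0])
samples, rate = gmc_sphere(lambda y: -c @ y, lambda y: -c, np.array([1.0, 0.0, 0.0]))
```

All samples stay exactly on the sphere, and the chain drifts toward the mode direction of the target.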

## 6. Discussion

We have proposed two alternative derivations of the geodesic Monte Carlo for embedded manifolds byrne2013geodesic. These derivations are conceptually simpler, as they do not rely on a notion of intrinsic manifold geometry. They clarify the original algorithm by showing that the inclusion of the log-Jacobian of the embedding in the Hamiltonian is unnecessary in any case where the target distribution is defined using embedding coordinates. This claim goes beyond the statement of the original paper.

Here, the original geodesic Monte Carlo algorithm was presented as a special case of two general classes of algorithms with non-trivial mass matrices. As a result, the new derivations emphasized the role played by the degenerate Gaussian distribution. Finally, the exposition hinted at how Metropolis adjustments may be incorporated into geometric Langevin algorithms such as those of leimkuhler2016efficient.

## Appendix A Acceptance probabilities and generalized algorithms

### A.1. First alternative derivation

Let $x_0$ be the trajectory's starting position and $x_T$ be its end point. Also let $E_0$ and $E_T$ denote the energies at the beginning and end of the trajectory, respectively. Then, writing $S_x = P_x\Sigma P_x$, the log acceptance probability is $\min(0, \Delta)$, where

$$\begin{aligned}
\Delta &= E_0 - E_T + \log{\det}^{+}S_{x_T} - \log{\det}^{+}S_{x_0}\\
&= U(x_0) + \tfrac12\log{\det}^{+}S_{x_0} + \tfrac12\, p_0^{\mathsf T}S_{x_0}^{+}p_0
 - U(x_T) - \tfrac12\log{\det}^{+}S_{x_T} - \tfrac12\, p_T^{\mathsf T}S_{x_T}^{+}p_T\\
&\quad + \log{\det}^{+}S_{x_T} - \log{\det}^{+}S_{x_0}\\
&= \widetilde E_0 - \widetilde E_T\,.
\end{aligned}$$

In the final line, $\widetilde E_0$ and $\widetilde E_T$ denote the terms collected into those featuring the initial and final positions, respectively.

### A.2. Second alternative derivation

Again let $x_0$ be the trajectory's starting position and $x_T$ be its end point. Let $E_0$ and $E_T$ denote the energies at the beginning and end of the trajectory, respectively. Then the log acceptance probability is $\min(0, \Delta)$, where

$$\begin{aligned}
\Delta &= E_0 - E_T + \text{(log-Jacobian correction)}\\
&= \Big[U(x_0) + \tfrac12\, v_0^{\mathsf T}\big(P_{x_0}\Sigma P_{x_0}\big)^{+}v_0\Big]
 - \Big[U(x_T) + \tfrac12\, v_T^{\mathsf T}\big(P_{x_T}\Sigma P_{x_T}\big)^{+}v_T\Big]\\
&= \widetilde E_0 - \widetilde E_T\,,
\end{aligned}$$

the log pseudo-determinants cancelling against the Jacobian correction. In the final line, $\widetilde E_0$ and $\widetilde E_T$ denote the terms collected into those featuring the initial and final positions, respectively.

## Appendix B Projection matrix for the Stiefel manifold

When modeling an element of the Stiefel manifold, for momentum matrix $V$ we write the degenerate Gaussian distribution

$$\mathrm{vec}(V) \sim \mathrm{N}\big(0, P_X\Sigma P_X\big)\,;$$

here $X$ and $V$ are $d \times p$ matrices. To get the form for $P_X$, we note that the orthogonal projection of a matrix $V$ onto the tangent space at $X$ is

$$V - \tfrac12\, X\big(X^{\mathsf T}V + V^{\mathsf T}X\big)\,.$$

Applying the vec operator gives

$$\mathrm{vec}(V) - \tfrac12\Big[\big(I_p \otimes XX^{\mathsf T}\big)\mathrm{vec}(V) + \big(X^{\mathsf T}\otimes X\big)\mathrm{vec}\big(V^{\mathsf T}\big)\Big]
= \mathrm{vec}(V) - \tfrac12\Big[\big(I_p \otimes XX^{\mathsf T}\big) + \big(X^{\mathsf T}\otimes X\big)K\Big]\mathrm{vec}(V)\,.$$

Hence

$$P_X = I_{dp} - \tfrac12\Big[\big(I_p \otimes XX^{\mathsf T}\big) + \big(X^{\mathsf T}\otimes X\big)K\Big]\,.$$

## Appendix C Deriving the first system of equations

To obtain Equation (3.2), we need to calculate

$$\nabla_x\, \tfrac12\log{\det}^{+}\big(P_x\Sigma P_x\big)\,.$$

This may be done using the differential and Theorem 2.20 from holbrook2018differentiating, namely

$$\mathrm d\,\log{\det}^{+}(A) = \mathrm{tr}\big(A^{+}\,\mathrm dA\big)\,.$$

Thus, writing $S_x = P_x\Sigma P_x$,

$$\mathrm d\,\log{\det}^{+}S_x$$

is given by

$$\mathrm{tr}\big(S_x^{+}\,\mathrm dS_x\big) = \mathrm{tr}\Big(S_x^{+}\big[(\mathrm dP_x)\Sigma P_x + P_x\Sigma(\mathrm dP_x)\big]\Big)\,,$$

so we have, using $P_x S_x^{+} = S_x^{+} P_x = S_x^{+}$,

$$\mathrm d\,\log{\det}^{+}S_x = \mathrm{tr}\Big(\big(\Sigma S_x^{+} + S_x^{+}\Sigma\big)\,\mathrm dP_x\Big)\,.$$

Distributing the leading differential $\mathrm dP_x = -\big(\mathrm dx\, x^{\mathsf T} + x\,\mathrm dx^{\mathsf T}\big)$ and rearranging terms gives

$$\mathrm d\,\log{\det}^{+}S_x = -2\, x^{\mathsf T}\big(S_x^{+}\Sigma + \Sigma S_x^{+}\big)\,\mathrm dx\,,$$

but the first term of the inner parenthesis is equal to zero because $x^{\mathsf T}S_x^{+} = 0$, and so

$$\mathrm d\,\log{\det}^{+}S_x = -2\, x^{\mathsf T}\Sigma S_x^{+}\,\mathrm dx\,.$$

Hence,

$$\nabla_x \log{\det}^{+}S_x = -2\, S_x^{+}\Sigma\, x\,,$$

and it follows immediately that

$$\nabla_x\, \tfrac12\log{\det}^{+}\big(P_x\Sigma P_x\big) = -\big(P_x\Sigma P_x\big)^{+}\Sigma\, x\,.$$
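Both the cited differential identity and the resulting spherical gradient can be spot-checked by finite differences (our own check, with illustrative matrices; the perturbation of $x$ is renormalized so that the curve stays on the sphere):

```python
import numpy as np
rng = np.random.default_rng(4)

def log_pdet(A, tol=1e-10):
    w = np.linalg.eigvalsh(A)
    return float(np.sum(np.log(w[w > tol])))

# 1) Theorem 2.20: d log det^+(A) = tr(A^+ dA) along a rank-preserving PSD family.
d, r = 5, 3
B0, B1 = rng.standard_normal((d, r)), rng.standard_normal((d, r))
A = lambda t: (B0 + t * B1) @ (B0 + t * B1).T      # PSD with rank r
h = 1e-6
fd = (log_pdet(A(h)) - log_pdet(A(-h))) / (2 * h)
dA = B1 @ B0.T + B0 @ B1.T                         # d/dt A(t) at t = 0
analytic = np.trace(np.linalg.pinv(A(0.0)) @ dA)

# 2) The spherical gradient -2 S_x^+ Sigma x, tested along a tangent direction w.
x = rng.standard_normal(d); x /= np.linalg.norm(x)
Sigma = np.diag(np.arange(1.0, d + 1.0))
P = lambda y: np.eye(d) - np.outer(y, y)
S = lambda y: P(y) @ Sigma @ P(y)
w = P(x) @ rng.standard_normal(d)                  # tangent direction at x
xt = lambda t: (x + t * w) / np.linalg.norm(x + t * w)
fd_sphere = (log_pdet(S(xt(h))) - log_pdet(S(xt(-h)))) / (2 * h)
grad = -2.0 * np.linalg.pinv(S(x)) @ Sigma @ x
```

The directional derivative of $\log\det^{+}S_x$ along $w$ agrees with $\langle \nabla_x \log\det^{+}S_x,\, w\rangle$ to finite-difference accuracy.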
