Probabilistic Solutions to Differential Equations and their Application to Riemannian Statistics

06/03/2013 ∙ by Philipp Hennig, et al.

We study a probabilistic numerical method for the solution of both boundary and initial value problems that returns a joint Gaussian process posterior over the solution. Such methods have concrete value in statistics on Riemannian manifolds, where non-analytic ordinary differential equations are involved in virtually all computations. The probabilistic formulation permits marginalising the uncertainty of the numerical solution, so that statistics become less sensitive to inaccuracies. This leads to new Riemannian algorithms for mean value computations and principal geodesic analysis. Marginalisation also means that the numerical results can be allowed to be less precise than point estimates, enabling a noticeable speed-up over the state of the art. Our approach is an argument for the wider point that uncertainty caused by numerical calculations should be tracked throughout the pipeline of machine learning algorithms.


1 Introduction

Systems of ordinary differential equations (ODEs), defined here as equations of the form

$c^{(n)}(t) = f\big(c(t), c'(t), \ldots, c^{(n-1)}(t), t\big)$   (1)

with a curve $c : \mathbb{R} \to \mathbb{R}^N$ and a function $f : \mathbb{R}^{nN} \times \mathbb{R} \to \mathbb{R}^N$, are essential tools of numerical mathematics. In machine learning and statistics, they play a role in the prediction of future states in dynamical settings, but another area where they have a particularly central role is statistical computation on Riemannian manifolds. On such manifolds, the distance between two points is defined as the length of the shortest path connecting them, and calculating this path requires solving ordinary differential equations. Distances are elementary, so virtually all computations in Riemannian statistics involve numerical ODE solvers (Sec. 2). Just calculating the mean of a dataset of points, trivial in Euclidean space, requires optimisation over geodesics, and thus the repeated solution of ODEs. Errors introduced by this heavy use of numerical machinery are structured and biased, but hard to quantify, mostly because numerical solvers do not return sufficiently structured error estimates.

The problem of solving ordinary differential equations is very well studied (Hairer et al. (1987, §1) give a historical overview), and modern ODE solvers are highly evolved. But at their core they are still based on the seminal work of Runge (1895) and Kutta (1901): a carefully designed process of iterative extrapolation. In fact, ODE solvers, like other numerical methods, can be interpreted as inference algorithms (Diaconis, 1988; O'Hagan, 1992), in that they estimate an intractable quantity (the ODE's solution) given "observations" of an accessible quantity (values of $f$). Like all estimators, these methods are subject to an error between estimate and truth. Modern solvers can estimate this error (Hairer et al., 1987, §II.3 & §II.4), but generally only as a numerical bound at the end of each local step, not in a functional form over the space of solutions. In this paper we study and extend an idea originally proposed by John Skilling (1991) to construct (Sec. 3) a class of probabilistic solvers for both boundary value problems (BVPs) and initial value problems (IVPs). Our version of Skilling's algorithm returns nonparametric posterior probability measures over the space of solutions of the ODE. This kind of structured uncertainty improves Riemannian statistics, where the solution of the ODE is a latent "nuisance" variable. As elsewhere, it is beneficial to replace point estimates of such variables with probability distributions and marginalise over them (Sec. 4). A second, remarkable result is that, although the probabilistic solver presented here provides additional functionality, when applied to concrete problems in Riemannian statistics it can actually be faster than standard boundary value problem solvers (as implemented in Matlab). One reason is that the answers returned by the probabilistic solver can afford to be less precise, because small errors are smoothed out when marginalised over the posterior uncertainty. As the cost of Riemannian statistics is dominated by that of solving ODEs, time saved by being "explicitly sloppy" in this probabilistic sense is of great value. Sec. 5 offers some empirical evaluations. A Matlab implementation of the solver is published alongside this manuscript (http://probabilistic-numerics.org/ODEs.html).

The focus of this text is on the concrete case of Riemannian statistics and ODEs. But we believe it to be a good example for the value of probabilistic numerical methods to machine learning and statistics more generally. Treating numerical algorithms as black boxes ignores their role as error sources and can cause unnecessary computational complexity. Our results show that algorithms designed with probabilistic calculations in mind can offer qualitative and quantitative benefits. In particular, they can propagate uncertainty through a numerical computation, providing a joint approximate distribution between input and output of the computation.

2 Riemannian Statistics

Statistics on Riemannian manifolds is receiving increasing attention because it enables statistics in constrained spaces, and in spaces with a smoothly changing metric. Examples include the analysis of shape (Freifeld and Black, 2012; Fletcher et al., 2004), human poses (Hauberg et al., 2012b) and image features (Porikli et al., 2006). Here we focus on Riemannian statistics in metric learning, where a smoothly changing metric locally adapts to the data (Hauberg et al., 2012a).

Let $M(x)$ denote a positive definite, smoothly changing, metric tensor function at $x \in \mathbb{R}^D$. It locally defines a norm $\|y\|_x^2 = y^\top M(x)\, y$. This norm in turn defines (Berger, 2007) the length of a curve $c : [0, 1] \to \mathbb{R}^D$ as $\mathrm{Length}(c) = \int_0^1 \|c'(t)\|_{c(t)}\, \mathrm{d}t$. For two points $a, b \in \mathbb{R}^D$, the shortest connecting curve with $c(0) = a$ and $c(1) = b$ is known as a geodesic, and defines the distance between $a$ and $b$ on the manifold. Using the Euler–Lagrange equation, it can be shown (Hauberg et al., 2012a) that geodesics must satisfy the system of homogeneous second-order boundary value problems

$c''(t) = -\tfrac{1}{2}\, M^{-1}\big(c(t)\big) \left[\frac{\partial\, \mathrm{vec}[M(c(t))]}{\partial c(t)}\right]^{\top} \big(c'(t) \otimes c'(t)\big), \qquad c(0) = a, \quad c(1) = b,$   (2)

where the notation $\mathrm{vec}[M]$ indicates the vectorisation operation returning a vector in $\mathbb{R}^{D^2}$, formed by stacking the elements of the matrix row by row, and $\otimes$ is the Kronecker product, which is associated with the vectorisation operation (van Loan, 2000). The short-hand $f(c, c')$ for the right-hand side of Eq. (2) is used throughout the paper. For simple manifolds this BVP can be solved analytically, but most models of interest require numerical solutions.
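To make the role of the metric concrete, the following sketch numerically evaluates the length functional above for a discretised curve. The metric function here is an arbitrary illustrative choice, not the learned metric of Eq. (7); a geodesic is the curve that minimises this quantity subject to the boundary constraints.

```python
import numpy as np

def metric(x):
    """Illustrative smoothly changing metric tensor M(x); not the paper's
    learned metric, just a stand-in with a position-dependent bump."""
    return np.eye(len(x)) * (1.0 + np.exp(-np.sum(x ** 2)))

def curve_length(curve):
    """Approximate the integral over [0,1] of sqrt(c'(t)^T M(c(t)) c'(t)) dt
    for a curve given as an (n, D) array on a uniform grid over [0, 1]."""
    dt = 1.0 / (len(curve) - 1)
    velocity = np.gradient(curve, dt, axis=0)          # finite-difference c'(t)
    speed = [np.sqrt(v @ metric(x) @ v) for x, v in zip(curve, velocity)]
    return np.trapz(speed, dx=dt)

# A straight line between a and b; the geodesic is the curve minimising
# this length subject to c(0) = a, c(1) = b.
a, b = np.array([0.0, 0.0]), np.array([1.0, 1.0])
line = a + np.linspace(0.0, 1.0, 100)[:, None] * (b - a)
print(curve_length(line))
```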

The geodesic starting at $c(0) = a$ with initial direction $c'(0) = v$ is given by the initial value problem

$c''(t) = f\big(c(t), c'(t)\big), \qquad c(0) = a, \quad c'(0) = v.$   (3)

By the Picard–Lindelöf theorem (Tenenbaum and Pollard, 1963), this curve exists and is locally unique. As $v$ is the derivative of a curve on the manifold at $a$, it belongs to the tangent space of the manifold at $a$. Eq. (3) implies that geodesics can be fully described by their tangent at $a$. This observation is a corner-stone of statistical operations on Riemannian manifolds, which typically start with a point $\mu$ (often the mean), compute geodesics from there to all data points, and then perform Euclidean statistics on the tangents at $\mu$. This implicitly gives statistics of the geodesics.

These concepts are formalised by the exponential and logarithm maps, connecting the tangent space and the manifold. The exponential map $\mathrm{Exp}_a(v)$ returns the end-point of the geodesic $c$ satisfying Eq. (3), such that $\mathrm{Exp}_a(v) = c(1)$. The logarithm map is the inverse of the exponential map. It returns the initial direction of the geodesic from $a$ to $b$, normalised to the length of the geodesic, i.e.

$\mathrm{Log}_a(b) = \frac{c'(0)}{\|c'(0)\|_a}\, \mathrm{Length}(c).$   (4)
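As an illustration of how Eq. (3) defines the exponential map, the sketch below computes $\mathrm{Exp}_a(v)$ with an off-the-shelf IVP solver (scipy's solve_ivp) rather than the probabilistic solver developed in Sec. 3. The function geodesic_rhs, standing in for the right-hand side of Eq. (2), is assumed to be supplied by the user.

```python
import numpy as np
from scipy.integrate import solve_ivp

def exp_map(a, v, geodesic_rhs):
    """Sketch of Exp_a(v): solve the geodesic IVP (Eq. 3) c'' = f(c, c')
    with c(0) = a, c'(0) = v, and return the end point c(1).
    geodesic_rhs(c, dc) is an assumed callable implementing the
    right-hand side of Eq. (2) for the metric at hand."""
    D = len(a)

    def first_order(t, state):
        # rewrite the second-order ODE as a first-order system
        c, dc = state[:D], state[D:]
        return np.concatenate([dc, geodesic_rhs(c, dc)])

    sol = solve_ivp(first_order, (0.0, 1.0), np.concatenate([a, v]),
                    rtol=1e-8, atol=1e-8)
    return sol.y[:D, -1]

# In the Euclidean case M(x) = I the right-hand side vanishes and
# Exp_a(v) = a + v, which makes a convenient sanity check:
print(exp_map(np.zeros(2), np.array([1.0, 2.0]), lambda c, dc: np.zeros_like(c)))
```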

The following paragraphs give examples of how this theory is used in statistics, and of how the use of numerical solvers in these problems affects the results and their utility, thus motivating the use of a probabilistic solver.

Means on Manifolds

The most basic statistic of interest is the mean, which is defined as the minimiser of the sum of squared distances on the manifold. A global minimiser is called the Fréchet mean, while a local minimiser is called a Karcher mean (Pennec, 1999). In general, closed-form solutions do not exist and gradient descent is applied; the update rule for the estimate $\mu_k$ of the mean of a dataset $x_{1:Q}$, with learning rate $\alpha$, is (Pennec, 1999)

$\mu_{k+1} = \mathrm{Exp}_{\mu_k}\!\left(\frac{\alpha}{Q}\sum_{q=1}^{Q} \mathrm{Log}_{\mu_k}(x_q)\right).$   (5)

One practical issue is that, in each iteration, the exponential and logarithm maps need to be computed numerically: each evaluation of Eq. (5) requires the solution of $Q$ BVPs and one IVP. Practitioners thus perform only a few iterations rather than running gradient descent to convergence. Intuitively, it is clear that these initial steps of gradient descent do not require high numerical precision. The solver should only control the error sufficiently to find a rough direction of descent.
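A minimal sketch of this gradient-descent loop is given below. The functions exp_map and log_map are assumed wrappers around numerical IVP and BVP solvers (deterministic or probabilistic); the Euclidean check at the end only verifies the structure of the loop, not a Riemannian computation.

```python
import numpy as np

def intrinsic_mean(data, exp_map, log_map, alpha=1.0, iterations=5):
    """Gradient-descent mean (Eq. 5). exp_map(mu, v) and log_map(mu, x)
    are assumed to wrap numerical IVP/BVP solvers; only a few iterations
    are run, as discussed above."""
    mu = data.mean(axis=0)                      # initialise in the ambient space
    for _ in range(iterations):
        tangents = np.array([log_map(mu, x) for x in data])    # Q BVPs
        mu = exp_map(mu, alpha * tangents.mean(axis=0))         # one IVP
    return mu

# Euclidean sanity check: Exp/Log reduce to addition/subtraction,
# so the loop recovers the arithmetic mean.
data = np.random.randn(20, 3)
mu = intrinsic_mean(data, lambda m, v: m + v, lambda m, x: x - m)
print(np.allclose(mu, data.mean(axis=0)))
```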

Principal Geodesic Analysis

A generalisation of principal component analysis (PCA) to Riemannian manifolds is principal geodesic analysis (PGA) (Fletcher et al., 2004). It performs ordinary PCA in the tangent space at the mean, i.e. the principal directions $v_j$ are found as the eigenvectors of the covariance of the tangent vectors, $\frac{1}{Q}\sum_{q=1}^{Q} \mathrm{Log}_{\mu}(x_q)\,\mathrm{Log}_{\mu}(x_q)^{\top}$, where $\mu$ is the mean as computed by Eq. (5). The principal directions are then mapped back to the manifold to form principal geodesics via the exponential map, i.e. $t \mapsto \mathrm{Exp}_{\mu}(t\, v_j)$.

Again, this involves solving for geodesics: one from each data point $x_q$ to the mean. As the statistics are performed on the (notoriously unstable) derivatives of the numerical solutions, high-accuracy solvers are required here. These computational demands only allow for low-sample applications of PGA.
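The following sketch shows the tangent-space PCA step of PGA, assuming a log_map wrapper around a BVP solver. It illustrates the structure of the computation, not the paper's implementation.

```python
import numpy as np

def principal_geodesic_directions(data, mean, log_map, n_components=1):
    """Sketch of PGA: map the data to the tangent space at the mean via the
    logarithm map, then take eigenvectors of the tangent covariance.
    log_map(mean, x) is assumed to wrap a (probabilistic) BVP solver."""
    tangents = np.array([log_map(mean, x) for x in data])   # one BVP per point
    cov = tangents.T @ tangents / len(data)
    eigvals, eigvecs = np.linalg.eigh(cov)                   # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order[:n_components]]                  # principal directions

# The principal geodesic is then t -> Exp_mean(t * v) for each direction v.
```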

Applications in Metric Learning

The original appeal of Riemannian statistics is that many types of data naturally live on known Riemannian manifolds (Freifeld and Black, 2012; Fletcher et al., 2004; Porikli et al., 2006), for example certain Lie groups. But a recent generalisation is to estimate a smoothly varying metric tensor from the data, using metric learning, and perform Riemannian statistics on the manifold implied by the smooth metric (Hauberg et al., 2012a). The idea is to regress a smoothly changing metric tensor $M(x)$ from $R$ learned metric tensors $M_1, \ldots, M_R$, anchored at locations $x_1, \ldots, x_R$, as

$w_r(x) = \exp\!\left(-\frac{\|x - x_r\|^2}{2\sigma^2}\right)$   (6)

$M(x) = \frac{\sum_{r=1}^{R} w_r(x)\, M_r}{\sum_{r=1}^{R} w_r(x)}.$   (7)

As above, geodesics on this manifold require numerical solutions; hence this approach is currently only applicable to fairly small problems. The point estimates returned by numerical solvers, and their difficult-to-control computational cost, are both a conceptual and a computational roadblock for the practical use of Riemannian statistics. A method returning explicit probabilistic error estimates, at low cost, is needed.
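As a rough illustration of Eqs. (6)-(7), the sketch below blends a small set of local metric tensors into a smoothly changing metric by normalised, distance-based weighting. The names centres, local_metrics and sigma are illustrative, and the exact weighting scheme of Hauberg et al. (2012a) may differ in detail.

```python
import numpy as np

def smooth_metric(x, centres, local_metrics, sigma=1.0):
    """Smoothly changing metric in the spirit of Eqs. (6)-(7): a normalised,
    distance-weighted combination of learned local metric tensors."""
    weights = np.exp(-0.5 * np.sum((centres - x) ** 2, axis=1) / sigma ** 2)
    weights /= weights.sum()                           # normalise the weights
    return np.einsum('r,rij->ij', weights, local_metrics)

# Two local metrics blended across the plane:
centres = np.array([[0.0, 0.0], [4.0, 0.0]])
local_metrics = np.stack([np.diag([1.0, 10.0]), np.diag([10.0, 1.0])])
print(smooth_metric(np.array([2.0, 0.0]), centres, local_metrics))
```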

3 Solving ODEs with Gaussian process priors

Writing in 1991, John Skilling proposed treating initial value problems as a recursive Gaussian regression problem: given a Gaussian estimate for the solution of the ODE, one can evaluate a linearisation of $f$, thus construct a Gaussian uncertain evaluation of the derivative of $c$, and use it to update the belief on $c$. The derivations below start with a compact introduction to this idea in contemporary notation using Gaussian process priors, which is sufficiently general to be applicable to both boundary and initial values. Extending Skilling's method, we introduce a way to iteratively refine the estimates of the method, similar to implicit Runge–Kutta methods, and a method to infer the hyperparameters of the algorithm.

The upshot of these derivations is a method returning a nonparametric posterior over the solution of both boundary and initial value problems. Besides this more expressive posterior uncertainty, another functional improvement over contemporary solvers is that this kind of algorithm allows for Gaussian uncertainty on the boundary or initial values, an essential building block of probabilistic computations.

In parallel work, Chkrebtii et al. (2013) have studied several theoretical aspects of Skilling’s method thoroughly. Among other things, they provide a proof of consistency of the probabilistic estimator and argue for the uniqueness of Gaussian priors on ODE and PDE solutions. Here, we focus on the use of such probabilistic numerical methods within a larger (statistical) computational pipeline, to demonstrate the strengths of this approach, argue for its use, and highlight open research questions. We also offer a minor theoretical observation in Section 3.5, to highlight a connection between Skilling’s approach and the classic Runge-Kutta framework that has previously been overlooked.

Conceptually, the algorithm derived in the following applies to the general case of Eq. (1). For concreteness and simplicity of notation, though, we focus on the case of a second-order ODE. All experiments will be performed in the Riemannian context, where the object to be inferred is the geodesic curve consistent with Eq. (2) or (3): distance calculations between two points are boundary value problems of the form $c(0) = a$, $c(1) = b$. Shifts of variables along geodesics are initial value problems of the form $c(0) = a$, $c'(0) = v$, where $\|v\|_a$ is the length of the geodesic. The probabilistically uncertain versions of these are

$p\big(c(0), c(1)\big) = \mathcal{N}\!\left[\begin{pmatrix} c(0) \\ c(1) \end{pmatrix};\, \begin{pmatrix} a \\ b \end{pmatrix},\, \begin{pmatrix} \Lambda_a & 0 \\ 0 & \Lambda_b \end{pmatrix}\right]$   (8)

$p\big(c(0), c'(0)\big) = \mathcal{N}\!\left[\begin{pmatrix} c(0) \\ c'(0) \end{pmatrix};\, \begin{pmatrix} a \\ v \end{pmatrix},\, \begin{pmatrix} \Lambda_a & 0 \\ 0 & \Lambda_v \end{pmatrix}\right]$   (9)

with positive semidefinite covariance matrices $\Lambda_a, \Lambda_b, \Lambda_v$.

3.1 Prior

We assign a joint Gaussian process prior over the elements of the geodesic, with a covariance function factorising between its input and output dependence ($c_i$ is the $i$-th element of $c$):

$\mathrm{cov}\big(c_i(t), c_j(u)\big) = V_{ij}\, k(t, u),$   (10)

with a positive semidefinite matrix $V$, the covariance between output dimensions, and a positive semidefinite kernel $k$ defining the covariance between scalar locations along the curve. The geodesic problem only involves values $t \in [0, 1]$, but this restriction is not necessary in general. The mean functions have only minor effects on the algorithm. For BVPs and IVPs, we choose the linear functions $\mu(t) = a + (b - a)\,t$ and $\mu(t) = a + v\,t$, respectively. There is a wide space of covariance functions available for Gaussian process regression (Rasmussen and Williams, 2006, §4). In principle, every smooth covariance function can be used for the following derivations (more precisely, see Eq. (36) below). For our implementation we use the popular squared-exponential (aka. radial basis function, Gaussian) kernel $k(t, u) = \exp\!\big(-\frac{(t - u)^2}{2\lambda^2}\big)$. This amounts to the prior assumption that geodesics are smooth curves varying on a length scale of $\lambda$ along $t$, with output covariance $V$. Sec. 3.4 below explains how to choose these hyperparameters.

Gaussian processes are closed under linear transformations: for any linear operator $A$, if $p(c) = \mathcal{GP}(c; \mu, k)$ then $p(Ac) = \mathcal{GP}(Ac; A\mu, A k A^{\top})$. In particular, a Gaussian process prior on the geodesics thus also implies a Gaussian process belief over any derivative of $c$ (see Rasmussen and Williams (2006, §9.4)), with mean function $\partial\mu(t)/\partial t$ and covariance function

$\mathrm{cov}\!\left(\frac{\partial c_i(t)}{\partial t}, \frac{\partial c_j(u)}{\partial u}\right) = V_{ij}\, \frac{\partial^2 k(t, u)}{\partial t\, \partial u},$   (11)

provided the necessary derivatives of $k$ exist, as they do for the squared-exponential kernel. Explicit algebraic forms for the squared-exponential case can be found in the supplements.
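For concreteness, the following sketch evaluates the squared-exponential kernel together with its mixed derivative, which by Eq. (11) gives the prior covariance of $c'(t)$; the solver additionally needs higher derivatives, listed in the supplements.

```python
import numpy as np

def se_kernel(t, u, lam=1.0):
    """Squared-exponential kernel k(t, u) and the mixed derivative
    d^2 k / dt du, which is the prior covariance of c' (Eq. 11).
    A minimal sketch for scalar inputs on a grid."""
    d = t[:, None] - u[None, :]
    k = np.exp(-0.5 * d ** 2 / lam ** 2)
    dk_dt_du = (1.0 / lam ** 2 - d ** 2 / lam ** 4) * k
    return k, dk_dt_du

t = np.linspace(0.0, 1.0, 5)
k, dk = se_kernel(t, t)
# the derivative kernel is itself a valid (positive semidefinite) covariance:
print(np.linalg.eigvalsh(dk).min() >= -1e-9)
```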

The boundary or initial values $(a, b)$ or $(a, v)$ can be incorporated into the Gaussian process belief over $c$ analytically, via the factors defined in Eqs. (8) and (9). For the case of boundary value problems, writing $c_B = [c(0); c(1)]$, $y_B = [a; b]$ and $\mu_B = [\mu(0); \mu(1)]$, this gives a Gaussian process conditional over $c$ with mean and covariance functions (c.f. supplements)

$\tilde{\mu}(t) = \mu(t) + \mathrm{cov}\big[c(t), c_B\big]\, \mathrm{cov}\big[c_B, c_B\big]^{-1}\, (y_B - \mu_B)$   (12)

$\tilde{k}(t, u) = V\, k(t, u) - \mathrm{cov}\big[c(t), c_B\big]\, \mathrm{cov}\big[c_B, c_B\big]^{-1}\, \mathrm{cov}\big[c_B, c(u)\big],$   (13)

where $\mathrm{cov}[c(t), c_B] = \big[V k(t, 0),\; V k(t, 1)\big]$ and $\mathrm{cov}[c_B, c_B]$ contains the blocks $V k(0, 0), \ldots, V k(1, 1)$ plus the covariances $\Lambda_a, \Lambda_b$ of Eq. (8) on its diagonal. For IVPs, $c(1)$ has to be replaced with $c'(0)$, $b$ with $v$, and $\Lambda_b$ with $\Lambda_v$, using the derivative covariances of Eq. (11).

3.2 Likelihood

At this point we pick up Skilling's idea: select a sequence $t_1 < t_2 < \dots < t_n$ of representer points. The algorithm now moves iteratively from $t_{i-1}$ to $t_i$. At point $t_i$, construct estimates $[\hat{c}_i, \hat{c}'_i]$ for the solution of the ODE and its derivative. The mean (and simultaneously the most probable) estimate under the Gaussian process belief on $c$ is the mean function, so we choose $\hat{c}_i = \tilde{\mu}(t_i)$ and $\hat{c}'_i = \tilde{\mu}'(t_i)$, where $\tilde{\mu}$ is the mean of the current belief, conditioned on the boundary values and the evaluations collected so far. This estimate can be used to construct an "observation" $y_i = f(\hat{c}_i, \hat{c}'_i)$ of the curvature of $c$ along the solution. For our application we use the $f$ defined in Eq. (2). This $f$ is homogeneous, so $t$ does not feature as a direct input to $f$, but other $f$'s, including inhomogeneous ones, can be treated in the exact same way. The idea of constructing such approximately correct evaluations of $f$ along the solution is also at the heart of classic ODE solvers such as the Runge–Kutta family (see Section 3.5). Skilling's idea, however, is to probabilistically describe the error of this estimation by a Gaussian distribution: assume that we have access to upper bounds $U$ and $U'$ on the absolute values of the elements of the Jacobians of $f$ with respect to $c$ and $c'$:

$\left|\frac{\partial f_k(c, c')}{\partial c_\ell}\right| \le U_{k\ell} \qquad \text{and} \qquad \left|\frac{\partial f_k(c, c')}{\partial c'_\ell}\right| \le U'_{k\ell}$   (14)

for all $c, c'$. In the specific case of the Riemannian problem on learned metrics, these bounds arise from analysis of Eq. (7). The uncertainty on $[c(t_i); c'(t_i)]$ around $[\hat{c}_i; \hat{c}'_i]$ is Gaussian, with covariance (see also supplements)

$\Sigma_i = \begin{pmatrix} \mathrm{cov}\big[c(t_i), c(t_i)\big] & \mathrm{cov}\big[c(t_i), c'(t_i)\big] \\ \mathrm{cov}\big[c'(t_i), c(t_i)\big] & \mathrm{cov}\big[c'(t_i), c'(t_i)\big] \end{pmatrix},$   (15)

evaluated under the current belief. Considering this prior uncertainty on $[c(t_i); c'(t_i)]$ to first order, the error caused by the estimation of $[\hat{c}_i, \hat{c}'_i]$ used to construct the observation $y_i$ is bounded by a zero-mean Gaussian, with covariance (in $\mathbb{R}^{N \times N}$)

$\Lambda_i = \begin{pmatrix} U & U' \end{pmatrix} \Sigma_i \begin{pmatrix} U & U' \end{pmatrix}^{\top}.$   (16)

This uncertain observation of $c''(t_i)$ is included into the belief over $c$ as the likelihood factor

$p(y_i \,|\, c) = \mathcal{N}\big(y_i;\, c''(t_i),\, \Lambda_i\big).$   (17)

The algorithm now moves to $t_{i+1}$ and repeats.
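The sketch below implements this loop for a one-dimensional second-order IVP, conditioning a GP with squared-exponential kernel on the initial values and on the constructed curvature observations. For brevity it uses a fixed observation-noise jitter in place of the Jacobian-bound construction of Eq. (16), and recomputes the full GP update at every step; it is meant to convey the structure of the algorithm, not an efficient implementation.

```python
import numpy as np

def kdd(t, u, p, q, lam=0.3):
    """Covariance between the p-th derivative of c at t and the q-th
    derivative at u, for the squared-exponential kernel (p, q <= 2)."""
    d = t[:, None] - u[None, :]
    k = np.exp(-0.5 * d ** 2 / lam ** 2)
    poly = {(0, 0): 1.0,
            (1, 0): -d / lam ** 2, (0, 1): d / lam ** 2,
            (1, 1): 1 / lam ** 2 - d ** 2 / lam ** 4,
            (2, 0): d ** 2 / lam ** 4 - 1 / lam ** 2,
            (0, 2): d ** 2 / lam ** 4 - 1 / lam ** 2,
            (2, 1): d ** 3 / lam ** 6 - 3 * d / lam ** 4,
            (1, 2): 3 * d / lam ** 4 - d ** 3 / lam ** 6,
            (2, 2): 3 / lam ** 4 - 6 * d ** 2 / lam ** 6 + d ** 4 / lam ** 8}
    return poly[(p, q)] * k

def probabilistic_ivp(f, a, v, n=20, obs_noise=1e-6):
    """One-dimensional sketch of the Skilling-style solver for
    c'' = f(c, c'), c(0) = a, c'(0) = v on t in [0, 1]."""
    mean = lambda t, p: {0: a + v * t, 1: v + 0 * t, 2: 0 * t}[p]
    T, P, Y, noise = [0.0, 0.0], [0, 1], [a, v], [1e-10, 1e-10]

    def post_mean(tq, p_query):
        """Posterior mean of the p_query-th derivative at the points tq,
        given all (derivative) observations collected so far."""
        tq = np.atleast_1d(np.asarray(tq, dtype=float))
        G = np.array([[kdd(np.array([ti]), np.array([tj]), pi, pj)[0, 0]
                       for tj, pj in zip(T, P)] for ti, pi in zip(T, P)])
        G += np.diag(noise)
        kxT = np.column_stack([kdd(tq, np.array([ti]), p_query, pi)[:, 0]
                               for ti, pi in zip(T, P)])
        resid = np.array(Y) - np.array([mean(ti, pi) for ti, pi in zip(T, P)])
        return mean(tq, p_query) + kxT @ np.linalg.solve(G, resid)

    for t_i in np.linspace(0.0, 1.0, n + 1)[1:]:   # move along the grid
        c_hat = post_mean(t_i, 0)[0]               # current estimate of c(t_i)
        dc_hat = post_mean(t_i, 1)[0]              # ... and of c'(t_i)
        T.append(t_i); P.append(2)                 # "observe" the curvature there
        Y.append(f(c_hat, dc_hat)); noise.append(obs_noise)

    return lambda t: post_mean(t, 0)               # posterior mean curve

# c'' = -c with c(0) = 0, c'(0) = 1 has the solution sin(t).
curve = probabilistic_ivp(lambda c, dc: -c, 0.0, 1.0)
print(curve(np.array([0.5, 1.0])), np.sin([0.5, 1.0]))
```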

Figure 1: Convergence of probabilistic ODE solver as number of evaluation points increases. Each panel shows a dataset (grey points) whose distribution determines a Riemannian manifold. The algorithm infers the geodesic connecting two points (black) on this manifold. Mean of inferred Gaussian process posterior in red. Evaluation points of the ODE in orange. Green lines are samples drawn from the posterior belief over geodesics. Left-most panel: no ODE evaluations (only boundary values). The supplements contain an animated version of this figure.

3.3 Posterior

After $n$ steps, the algorithm has collected $n$ evaluations $y_1, \ldots, y_n$, and constructed equally many uncertainty estimates $\Lambda_1, \ldots, \Lambda_n$. Let $Y = [y_1; \ldots; y_n]$ and $T = (t_1, \ldots, t_n)$. The marginal distribution over the $y_i$ under the prior (10), conditioned on the boundary values as in Eqs. (12)–(13), is Gaussian with mean $\tilde{\mu}''(T)$ and covariance

$G = \frac{\partial^4 \tilde{k}(T, T)}{\partial t^2\, \partial u^2} + \Lambda.$   (18)

The matrix $\Lambda$ is a rearranged form of the block diagonal matrix whose $i$-th block diagonal element is the matrix $\Lambda_i$. The posterior distribution over $c$, and the output of the algorithm, is thus a Gaussian process with mean function

$\bar{\mu}(t) = \tilde{\mu}(t) + \frac{\partial^2 \tilde{k}(t, T)}{\partial u^2}\, G^{-1}\, \big(Y - \tilde{\mu}''(T)\big)$   (19)

(where $\tilde{k}(t, T) = [\tilde{k}(t, t_1), \ldots, \tilde{k}(t, t_n)]$, etc.) and covariance function (see also supplements)

$\bar{k}(t, u) = \tilde{k}(t, u) - \frac{\partial^2 \tilde{k}(t, T)}{\partial u^2}\, G^{-1}\, \frac{\partial^2 \tilde{k}(T, u)}{\partial t^2}.$   (20)

Fig. 1 shows conceptual sketches. Crucially, this posterior is a joint distribution over the entire curve, not just over its value at a particular location along the curve. This sets it apart from the error bounds returned by classic ODE solvers, and means it can be used directly for Riemannian statistics (Sec. 4).
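Because the output is a full Gaussian process, one can draw joint samples of whole candidate solutions (like the green curves in Fig. 1) rather than pointwise error bars. The sketch below does this for arbitrary posterior mean and covariance functions evaluated on a grid; the mean and covariance used at the bottom are illustrative placeholders, not output of the solver.

```python
import numpy as np

def sample_posterior_curves(mean_fn, cov_fn, n_samples=5, grid=None):
    """Draw joint samples of the whole curve from a GP posterior specified
    by callables for its mean and covariance functions (cf. Eqs. 19-20)."""
    if grid is None:
        grid = np.linspace(0.0, 1.0, 100)
    mu = mean_fn(grid)
    K = cov_fn(grid[:, None], grid[None, :]) + 1e-8 * np.eye(len(grid))
    L = np.linalg.cholesky(K)                  # joint samples, not pointwise errors
    return grid, mu[None, :] + np.random.randn(n_samples, len(grid)) @ L.T

# Illustrative placeholders for the posterior mean and covariance:
grid, samples = sample_posterior_curves(
    np.sin, lambda t, u: 0.01 * np.exp(-0.5 * (t - u) ** 2 / 0.1 ** 2))
print(samples.shape)                           # (5, 100)
```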

3.3.1 Iterative Refinements

We have found empirically that the estimates of the method can be meaningfully (albeit not drastically) improved by a small number of iterations over the estimates after the first round, without changing the Gram matrix $G$. In other words, we update the mean estimates $[\hat{c}_i, \hat{c}'_i]$ used to construct the $y_i$ based on the posterior arising from later evaluations. This idea is related to the idea behind implicit Runge–Kutta methods (Hairer et al., 1987, §II.7). It is important not to update the uncertainty estimates based on the posterior uncertainty in this process; this would lead to strong over-fitting, because it ignores the recursive role earlier evaluations play in the construction of later estimates.

3.4 Choice of Hyperparameters and Evaluation Points

The prior definitions of Eq. (10) rely on two hyperparameters: the output covariance $V$ and the length scale $\lambda$. The output covariance $V$ defines the overall scale on which geodesics are expected to vary. For curves within a dataset $x_{1:Q}$, we assume that the curves are typically limited to the Euclidean domain covered by the data, and that geodesics connecting points close to each other within the dataset typically have smaller absolute values than geodesics connecting points far apart. For curves between $a$ and $b$, we thus set $V$ in an empirical Bayesian fashion, based on the sample covariance $\hat{\Sigma}$ of the dataset and the separation of $a$ and $b$. For IVPs, we replace $b$ in this construction by $a + v$.

The width $\lambda$ of the covariance function controls the regularity of the inferred curve. Small values of $\lambda$ give rise to rough estimates associated with high posterior uncertainty, while large values give regular, almost straight curves with relatively low posterior uncertainty. Following a widely adopted approach in Gaussian process regression (Rasmussen and Williams, 2006, §5.2), we choose $\lambda$ to maximise its marginal likelihood under the observations $Y$. This marginal is Gaussian. Using the Gram matrix $G$ from Eq. (18), its logarithm is

$\log p(Y \,|\, \lambda) = -\frac{1}{2}\big(Y - \tilde{\mu}''(T)\big)^{\top} G^{-1} \big(Y - \tilde{\mu}''(T)\big) - \frac{1}{2}\log|G| - \frac{nN}{2}\log 2\pi.$   (21)

A final design choice is the location of the evaluation points $t_1, \ldots, t_n$. The optimal choice of these grid points depends on $f$ and is computationally prohibitive, so we use a pre-determined grid. For BVPs, we use sigmoid grids, as they emphasise the boundary areas; for IVPs we use linear grids. Alternative choices give acceptably good results as well.
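A simple way to realise this choice in practice is a grid search over candidate length scales, scoring each by Eq. (21). The helpers gram_fn and mean_fn below are assumed to assemble the Gram matrix of Eq. (18) and the prior mean of the observations for a given length scale; the grid search is a plain alternative to the Newton-Raphson scheme discussed in the supplements.

```python
import numpy as np

def log_marginal_likelihood(lam, T, Y, gram_fn, mean_fn):
    """Log marginal likelihood of the curvature observations Y at points T
    (Eq. 21), for a candidate length scale lam. gram_fn(T, lam) is assumed
    to return the Gram matrix G of Eq. (18), including observation noise;
    mean_fn(T) returns the prior mean of the observations."""
    G = gram_fn(T, lam)
    resid = Y - mean_fn(T)
    _, logdet = np.linalg.slogdet(G)
    return (-0.5 * resid @ np.linalg.solve(G, resid)
            - 0.5 * logdet - 0.5 * len(Y) * np.log(2 * np.pi))

def choose_length_scale(T, Y, gram_fn, mean_fn):
    """Grid search over candidate length scales, maximising Eq. (21)."""
    candidates = np.logspace(-2, 0, 30)
    scores = [log_marginal_likelihood(l, T, Y, gram_fn, mean_fn)
              for l in candidates]
    return candidates[int(np.argmax(scores))]
```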

Figure 2: Solving five BVPs within a dataset (grey points). Orange: solutions found by Matlab's bvp5c. For the probabilistic solver, 30 posterior samples (thin green) and the mean as a thick red line. Simpler short curves lead to confident predictions, while the challenging curve to the bottom left has drastic uncertainty. The top right curve demonstrates the ability, not offered by classic methods, to deal with uncertain boundary values (the black ellipse represents two standard deviations).

Cost

Because of the inversion of the $nN \times nN$ matrix $G$ in Eq. (18), the method described here has computational cost $\mathcal{O}(n^3 N^3)$. This may sound costly. However, both $n$ and $N$ are usually small numbers for problems in Riemannian statistics. Modern ODE solvers rely on a number of additional algorithmic steps to improve performance, which lead to considerable computational overhead. Since point estimates require high precision, the computational cost of existing methods is quite high in practice. We will show below that, since probabilistic estimates can be marginalised within statistical calculations, our algorithm in fact improves computation time.

3.5 Relationship to Runge-Kutta methods

The idea of iteratively constructing estimates for the solution and using them to extrapolate may seem ad hoc. In fact, the step from the established Runge–Kutta (RK) methods to this probabilistic approach is shorter than one might think. RK methods proceed in extrapolation steps of length $h$. Each such step consists of evaluations $y_i = f(\hat{c}(t_i), t_i)$ at locations $t_i$ within the step. The estimate $\hat{c}(t_i)$ is constructed as a linear combination of the previous evaluations and the initial values: $\hat{c}(t_i) = c_0 + h\sum_{j<i} w_{ij}\, y_j$. This is exactly the structure of the mean prediction of Eq. (19). The difference lies in the way the weights $w_{ij}$ are constructed. For RK, they are constructed carefully such that the error between the true solution of the ODE and the estimated solution is of high order in the step size $h$. Ensuring this is highly nontrivial, and often cited as a historical achievement of numerical mathematics (Hairer et al., 1987, §II.5). In the Gaussian process framework, the weights arise much more straightforwardly, from Eq. (19), the prior choice of kernel, and a (currently ad hoc) exploration strategy in $t$. The strength of this is that it naturally gives a probabilistic generative model for solutions, and thus various additional functionality. The downside is that, in this simplistic form, the probabilistic solver cannot be expected to give the same local error bounds as RK methods. In fact, one can readily construct examples where our specific form of the algorithm has low order. It remains a challenging research question whether the frameworks of nonparametric regression and Runge–Kutta methods can be unified under a consistent probabilistic concept.

4 Probabilistic Calculations on Riemannian Manifolds

We briefly describe how the results from Sec. 3 can be combined with the concepts from Sec. 2 to define tools for Riemannian statistics that marginalise numerical uncertainty of the solver.

Exponential maps

Given an initial point $a$ and an initial direction $v$, the exponential map can be computed by solving the IVP (3) as described in Sec. 3. This makes direct use of the ability of the solver to deal with uncertain boundary and initial values.

Logarithm maps

The logarithm map (4) is slightly more complicated to compute, as the length of a geodesic cannot be computed linearly. We can, however, sample from the logarithm map as follows. Let $c(0) = \mu$ and $c(1) = x$ and construct the geodesic Gaussian process posterior. Sample $c$ and $c'$ from this process and compute the corresponding logarithm map using Eq. (4). The length of each sampled curve is evaluated using standard quadrature. The mean and covariance of the logarithm map can then be estimated from the samples. Resorting to a sampling method might seem expensive, but in practice the runtime is dominated by the BVP solver.
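A sketch of this sampling-based logarithm map is given below. The callable bvp_posterior is an assumed interface that returns sampled curves and their derivatives on a grid from the geodesic GP posterior, and metric(p) implements $M(p)$.

```python
import numpy as np

def sample_log_map(bvp_posterior, metric, n_samples=100, grid=None):
    """Estimate the mean and covariance of Log_mu(x) by sampling geodesics.
    bvp_posterior(grid, n_samples) is an assumed interface returning sampled
    curves and derivatives, each of shape (n_samples, len(grid), D)."""
    if grid is None:
        grid = np.linspace(0.0, 1.0, 100)
    curves, derivs = bvp_posterior(grid, n_samples)
    logs = []
    for c, dc in zip(curves, derivs):
        Mv = np.array([metric(p) @ d for p, d in zip(c, dc)])
        speed = np.sqrt(np.einsum('nd,nd->n', dc, Mv))
        length = np.trapz(speed, grid)             # quadrature for Length(c)
        v0 = dc[0]
        v0_norm = np.sqrt(v0 @ metric(c[0]) @ v0)
        logs.append(length * v0 / v0_norm)         # Eq. (4) for this sample
    logs = np.array(logs)
    return logs.mean(axis=0), np.cov(logs.T)       # moments of Log_mu(x)
```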

Computing Means

The mean of a dataset $x_{1:Q}$ is computed using gradient descent (5), so we need to track the uncertainty through this optimisation. Let $\mu_k$ denote the estimate of the mean at iteration $k$ of the gradient descent scheme. First, compute the mean and covariance of the descent direction

$d_k = \frac{1}{Q}\sum_{q=1}^{Q} \mathrm{Log}_{\mu_k}(x_q)$   (22)

by sampling as above. Assuming $d_k$ follows a Gaussian distribution with the estimated mean and covariance, we then directly compute the distribution of

$\mu_{k+1} = \mathrm{Exp}_{\mu_k}\big(\alpha\, d_k\big),$   (23)

using the solver's ability to handle Gaussian uncertainty on the initial values.

Probabilistic optimisation algorithms (Hennig, 2013), which can directly return probabilistic estimates for the shape of the gradient (5), are a potential alternative.

Computing Principal Geodesics

Given a mean $\mu$, estimate the mean and covariance of the tangent vectors $\mathrm{Log}_{\mu}(x_q)$, and use these to estimate the mean and covariance of the principal direction $v_1$. The principal geodesic can then be computed as $\mathrm{Exp}_{\mu}(t\, v_1)$.

5 Experiments

Figure 3: Results on 1000 images of the digit 1 from the MNIST handwritten digits data set, projected into its first two principal components. Left: Several geodesics from the mean computed using the probabilistic solver (red mean predictions, green samples) as well as Matlab’s bvp5c (orange). Middle: The principal geodesics estimated using the same two solvers. Right: Running time of a mean value computation using different solvers as a function of the dimensionality.

This section reports on experiments with the suggested algorithms. It begins with an illustrative experiment on the well-known MNIST handwritten digits data set (http://yann.lecun.com/exdb/mnist/), followed by a study of the running time of the algorithms on the same data. Finally, we use the algorithms for visualising changes in human body shape.

5.1 Handwritten Digits

Fig. 3 (left and centre) shows 1000 images of the digit 1 expressed in the first two principal components of the data. This data roughly follows a one-dimensional nonlinear structure, which we estimate using PGA. To find geodesics that follow the local structure of the data, we learn local metric tensors as local inverse covariance matrices and combine them to form a smoothly changing metric using Eq. (7). The local metric tensors are learned using a standard EM algorithm for Gaussian mixtures (McLachlan and Krishnan, 1997).

For comparison, we consider a state-of-the-art BVP solver as implemented in Matlab's bvp5c (a four-stage implicit Runge–Kutta method (Hairer et al., 1987, §II.7)). This solver does not provide an uncertain output, so we resort to the deterministic Riemannian statistics algorithms (Sec. 2). Fig. 3, left, shows a few geodesics from the mean computed by our solver as well as by bvp5c. The geodesics provided by the two solvers are comparable and tend to follow the structure of the data. Fig. 3, centre, shows the principal geodesics of the data computed using the different solvers; the results are comparable. This impression is supported by a closer comparison of the results: Figure 4 shows curve lengths achieved by the two solvers. While bvp5c tends to perform slightly better, in particular for long curves, the error estimates of the probabilistic solver reflect the algorithm's imprecision quite well.

Figure 4: Inferred curve lengths for the 1000 geodesics from Figure 3. The probabilistic estimates for the lengths (mean ± 2 standard deviations) are plotted against the (assumed reliable) point estimates of Matlab's bvp5c.

Figure 5: Runtime of bvp5c against that of the probabilistic solver for the same geodesics (note differing abscissa and ordinate scales). The color of points encodes the length of the corresponding curve, from 0 (white) to 200 (dark red), for comparison with Fig. 4.

Even within this simple two-dimensional setting, the running times of the two solvers diverge considerably. Figure 5 plots the two runtimes against each other (along with the corresponding curve length, in color). The probabilistic solver's slight decrease in precision is accompanied by a decrease in computational cost of about an order of magnitude. Due to its design, the computational cost of the probabilistic solver is almost unaffected by the complexity of the problem (the length of the curve); harder problems simply lead to larger uncertainty. As dimensionality increases, this advantage grows. Fig. 3, right, shows the running time for computations of the mean for an increasing number of dimensions. For this experiment we use 100 images and 5 iterations of the gradient descent scheme. As the plot shows, the probabilistic approach is substantially faster in this experiment.

5.2 Human Body Shape

Figure 6: Samples from the posterior over the principal geodesic through the body shape dataset, estimated under a metric learned to emphasise belly circumference. Six samples each at the inferred manifold mean (centre) and three standard deviations of the dataset in either direction (left, right). The variation is shown in the supplementary video.

We also experimented with human body shape data, using laser scans and anthropometric measurements of the belly circumference of female subjects (Robinette et al., 1999). Body shape is represented using a SCAPE model (Anguelov et al., 2005), such that each scan is a high-dimensional vector. Following Hauberg et al. (2012a), we partition the subjects into clusters by belly circumference, learn a diagonal Large Margin Nearest Neighbour (LMNN) metric (Weinberger and Saul, 2009) for each cluster, and form a smoothly changing metric using Eq. (7). Under this metric, variance is inflated locally along directions related to belly circumference. We visualise this increased variance using PGA. Based on the timing results above, we only consider the probabilistic algorithm here, which achieved a runtime of 10 minutes.

Fig. 6 shows the first principal geodesic under the learned metric, where we show bodies at the mean as well as at three standard deviations from the mean. Uncertainty is visualised by six samples from the principal geodesic at each point; this uncertainty is not described by other solvers (this is more easily seen in the supplementary video). In general, the principal geodesic shows an increasing belly circumference (again, see the supplementary video).

6 Conclusion

We have studied a probabilistic solver for ordinary differential equations in the context of Riemannian statistics. The method can deal with uncertain boundary and initial values, and returns a joint posterior over the solution. The theoretical performance of such methods is currently not as tightly bounded as that of Runge–Kutta methods, but their structured error estimates allow a closed probabilistic formulation of computations on Riemannian manifolds. This includes marginalisation over the solution space, which weakens the effect of numerical errors. This, in turn, means that computational precision, and thus cost, can be lowered. The resulting improvement in computation time and functionality is an example of the conceptual and applied value of probabilistically designed numerical algorithms for computational statistics.

Acknowledgements

S. Hauberg is supported in part by the Villum Foundation as well as an AWS in Education Machine Learning Research Grant award from Amazon.com.

References

  • Anguelov et al. [2005] D. Anguelov, P. Srinivasan, D. Koller, S. Thrun, J. Rodgers, and J. Davis. SCAPE: shape completion and animation of people. ACM Transactions on Graphics, 24(3):408–416, 2005.
  • Berger [2007] M. Berger. A Panoramic View of Riemannian Geometry. Springer, 2007.
  • Chkrebtii et al. [2013] O. Chkrebtii, D.A. Campbell, M.A. Girolami, and B. Calderhead. Bayesian uncertainty quantification for differential equations. arXiv, stat.ME 1306.2365, June 2013.
  • Diaconis [1988] P. Diaconis. Bayesian numerical analysis. Statistical decision theory and related topics, IV(1):163–175, 1988.
  • Fletcher et al. [2004] P.T. Fletcher, C. Lu, S.M. Pizer, and S. Joshi. Principal Geodesic Analysis for the study of Nonlinear Statistics of Shape. IEEE Transactions on Medical Imaging, 23(8):995–1005, 2004.
  • Freifeld and Black [2012] O. Freifeld and M.J. Black. Lie bodies: A manifold representation of 3D human shape. In A. Fitzgibbon et al., editors, European Conf. on Computer Vision (ECCV), Part I, LNCS 7572, pages 1–14. Springer-Verlag, October 2012.
  • Hairer et al. [1987] E. Hairer, S.P. Nørsett, and G. Wanner. Solving Ordinary Differential Equations I – Nonstiff Problems. Springer, 1987.
  • Hauberg et al. [2012a] S. Hauberg, O. Freifeld, and M.J. Black. A geometric take on metric learning. In P. Bartlett, F.C.N. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems (NIPS) 25, pages 2033–2041. MIT Press, 2012a.
  • Hauberg et al. [2012b] S. Hauberg, S. Sommer, and K.S. Pedersen. Natural metrics and least-committed priors for articulated tracking. Image and Vision Computing, 30(6-7):453–461, 2012b.
  • Hennig [2013] P. Hennig. Fast probabilistic optimization from noisy gradients. In Proceedings of the 30th International Conference on Machine Learning (ICML), 2013.
  • Kutta [1901] W. Kutta. Beitrag zur näherungsweisen Integration totaler Differentialgleichungen. Zeitschrift für Mathematik und Physik, 46:435–453, 1901.
  • McLachlan and Krishnan [1997] G.J. McLachlan and T. Krishnan. The EM Algorithm and its Extensions. Wiley, 1997.
  • O’Hagan [1992] A. O’Hagan. Some Bayesian numerical analysis. Bayesian Statistics, 4:345–363, 1992.
  • Pennec [1999] X. Pennec. Probabilities and statistics on Riemannian manifolds: Basic tools for geometric measurements. In Proceedings of Nonlinear Signal and Image Processing, pages 194–198, 1999.
  • Porikli et al. [2006] F. Porikli, O. Tuzel, and P. Meer. Covariance tracking using model update based on Lie algebra. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), volume 1, pages 728–735. IEEE, 2006.
  • Rasmussen and Williams [2006] C.E. Rasmussen and C.K.I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
  • Robinette et al. [1999] K.M. Robinette, H. Daanen, and E. Paquet. The CAESAR project: a 3-D surface anthropometry survey. In 3-D Digital Imaging and Modeling, pages 380–386, 1999.
  • Runge [1895] C. Runge. Über die numerische Auflösung von Differentialgleichungen. Mathematische Annalen, 46:167–178, 1895.
  • Skilling [1991] J. Skilling. Bayesian solution of ordinary differential equations. Maximum Entropy and Bayesian Methods, Seattle, 1991.
  • Tenenbaum and Pollard [1963] M. Tenenbaum and H. Pollard. Ordinary Differential Equations. Dover Publications, 1963.
  • van Loan [2000] C.F. van Loan. The ubiquitous Kronecker product. Journal of Computational and Applied Mathematics, 123:85–100, 2000.
  • Weinberger and Saul [2009] K.Q. Weinberger and L.K. Saul. Distance metric learning for large margin nearest neighbor classification. The Journal of Machine Learning Research, 10:207–244, 2009.

Appendix A Gaussian Process Posteriors

Equations (10), (12), (18) and (19) in the main paper are Gaussian process posterior distributions over the curve $c$ arising from observations of various combinations of derivatives of $c$. These forms arise from the following general result (see also Equation A.6 in C.E. Rasmussen and C.K.I. Williams, Gaussian Processes for Machine Learning, MIT Press, 2006). Consider a Gaussian process prior distribution

$p(c) = \mathcal{GP}(c;\, \mu,\, k)$   (24)

over the function $c$, and observations $y$ with the likelihood

$p(y \,|\, c) = \mathcal{N}(y;\, A c,\, \Lambda)$   (25)

with a linear operator $A$. This includes the special case of the selection operator $A c = c(T)$, which selects function values at the locations $T = (t_1, \ldots, t_n)$, and the special case of derivative operators, which give $A c = \partial c(T)/\partial t$. Then the posterior over any linear map $B c$ of the curve (including the identity, giving $c$ itself) is

$p(B c \,|\, y) = \mathcal{GP}\Big(B c;\; B\mu + B k A^{\top}\big(A k A^{\top} + \Lambda\big)^{-1}(y - A\mu),\;\; B k B^{\top} - B k A^{\top}\big(A k A^{\top} + \Lambda\big)^{-1} A k B^{\top}\Big).$   (26)

And the marginal probability for $y$ is

$p(y) = \mathcal{N}\big(y;\, A\mu,\, A k A^{\top} + \Lambda\big).$   (27)

The classic example is that of the marginal posterior at a test location $t_*$ arising from noisy observations $y$ at locations $T$. This is the case where both $A$ and $B$ are selection operators, $A c = c(T)$ and $B c = c(t_*)$, which gives

$\mu_{c|y}(t_*) = \mu(t_*) + k_{t_* T}\big(k_{TT} + \Lambda\big)^{-1}(y - \mu_T)$   (28)

$\mathrm{cov}_{c|y}(t_*, t_*') = k_{t_* t_*'} - k_{t_* T}\big(k_{TT} + \Lambda\big)^{-1} k_{T t_*'}$   (29)

and, for the derivative process ($B c = \partial c(t_*)/\partial t$),

$\mu_{\partial c|y}(t_*) = \partial_{t_*}\mu(t_*) + \partial_{t_*} k_{t_* T}\big(k_{TT} + \Lambda\big)^{-1}(y - \mu_T)$   (30)

$\mathrm{cov}_{\partial c|y}(t_*, t_*') = \partial_{t_*}\partial_{t_*'} k_{t_* t_*'} - \partial_{t_*} k_{t_* T}\big(k_{TT} + \Lambda\big)^{-1} \partial_{t_*'} k_{T t_*'}$   (31)

and so on. All the Gaussian forms in the paper are special cases with various combinations of $A$ and $B$.

Appendix B Covariance Functions

The models in the paper assume a squared-exponential (aka. radial basis function, Gaussian) covariance function between values of the function $c$, of the form

$\mathrm{cov}\big(c_i(t), c_j(u)\big) = V_{ij}\, k(t, u), \qquad k(t, u) = \exp\!\left(-\frac{(t - u)^2}{2\lambda^2}\right).$   (32)

The calculations require the covariance between various combinations of derivatives of the function. For clear notation, we will use the operator $\partial_t := \partial/\partial t$ (and analogously $\partial_u$), and the abbreviation

$\delta := t - u.$   (33)

For the squared-exponential kernel, the required derivatives are

$\partial_t k = -\partial_u k = -\frac{\delta}{\lambda^2}\, k$   (34)

$\partial_t \partial_u k = -\partial_t^2 k = -\partial_u^2 k = \left(\frac{1}{\lambda^2} - \frac{\delta^2}{\lambda^4}\right) k$   (35)

$\partial_t^2 \partial_u k = -\partial_t \partial_u^2 k = \left(\frac{\delta^3}{\lambda^6} - \frac{3\delta}{\lambda^4}\right) k$   (36)

$\partial_t^2 \partial_u^2 k = \left(\frac{3}{\lambda^4} - \frac{6\delta^2}{\lambda^6} + \frac{\delta^4}{\lambda^8}\right) k.$   (37)

Of course, all those derivatives retain the Kronecker structure of the original kernel, because the output covariance $V$ does not depend on $t$ or $u$: $\partial_t (V_{ij}\, k) = V_{ij}\, \partial_t k$.
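These closed forms are easy to check numerically. The sketch below compares the highest derivative, Eq. (37), against a finite-difference approximation obtained by applying second-difference stencils in both arguments.

```python
import numpy as np

def se(t, u, lam=0.7):
    """Squared-exponential kernel k(t, u) with unit output scale."""
    return np.exp(-0.5 * (t - u) ** 2 / lam ** 2)

def d2t_d2u(t, u, lam=0.7):
    """Closed form of Eq. (37): the fourth mixed derivative of the kernel."""
    d = t - u
    return (3 / lam ** 4 - 6 * d ** 2 / lam ** 6 + d ** 4 / lam ** 8) * se(t, u, lam)

# Finite-difference check: second-difference stencils applied in t and in u.
t, u, h = 0.3, 0.8, 1e-2
stencil = [(-1, 1.0), (0, -2.0), (1, 1.0)]
fd = sum(w1 * w2 * se(t + s1 * h, u + s2 * h)
         for s1, w1 in stencil for s2, w2 in stencil) / h ** 4
print(fd, d2t_d2u(t, u))   # the two values agree to a few decimal places
```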

Appendix C Inferring Hyperparameters

Perhaps the most widely used way to learn hyperparameters for Gaussian process models is type-II maximum likelihood estimation, also known as evidence maximisation: the marginal probability for the observations is given by Eq. (27). Using the shorthand $G := A k A^{\top} + \Lambda$, its logarithm is

$\log p(y \,|\, \lambda) = -\frac{1}{2}(y - A\mu)^{\top} G^{-1} (y - A\mu) - \frac{1}{2}\log|G| - \frac{n}{2}\log 2\pi.$   (38)

To optimise this expression with respect to the length scale $\lambda$, we use

$\frac{\partial \log p(y \,|\, \lambda)}{\partial \lambda} = \frac{1}{2}(y - A\mu)^{\top} G^{-1} \frac{\partial G}{\partial \lambda} G^{-1} (y - A\mu) - \frac{1}{2}\operatorname{tr}\!\left(G^{-1}\frac{\partial G}{\partial \lambda}\right).$   (39)

From Equation (37), and using

$\frac{\partial k(t, u)}{\partial \lambda} = \frac{\delta^2}{\lambda^3}\, k(t, u),$   (40)

we find

(41)
(42)

It is also easy to evaluate the second derivative, giving

(43)
(44)
(45)

This allows constructing a Newton-Raphson optimisation scheme for the length scale of the algorithm.
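A minimal sketch of such a Newton-Raphson iteration is given below. For brevity it approximates the first and second derivatives of the log marginal likelihood by central differences rather than the closed forms above; log_ml is an assumed callable evaluating Eq. (38) for a given length scale.

```python
import numpy as np

def newton_length_scale(log_ml, lam0=0.5, tol=1e-6, max_iter=20, h=1e-4):
    """Newton-Raphson on the length scale. log_ml(lam) is an assumed callable
    returning the log marginal likelihood (Eq. 38); its derivatives are
    approximated here by central differences."""
    lam = lam0
    for _ in range(max_iter):
        g = (log_ml(lam + h) - log_ml(lam - h)) / (2 * h)                 # gradient
        H = (log_ml(lam + h) - 2 * log_ml(lam) + log_ml(lam - h)) / h ** 2
        step = -g / H if H < 0 else 0.1 * np.sign(g)   # fall back if not concave
        lam_new = max(lam + step, 1e-3)                # keep the length scale positive
        if abs(lam_new - lam) < tol:
            return lam_new
        lam = lam_new
    return lam

# Toy check with a concave surrogate whose maximum lies at lam = 0.8:
print(newton_length_scale(lambda l: -(l - 0.8) ** 2))
```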