Bayesian ODE Solvers: The Maximum A Posteriori Estimate

04/01/2020
by Filip Tronarp, et al.

It has recently been established that the numerical solution of ordinary differential equations can be posed as a nonlinear Bayesian inference problem, which can be approximately solved via Gaussian filtering and smoothing whenever a Gauss–Markov prior is used. In this paper the class of ν-times differentiable, linear time-invariant Gauss–Markov priors is considered. A taxonomy of Gaussian estimators is established, with the maximum a posteriori estimate at the top of the hierarchy; it can be computed with the iterated extended Kalman smoother. The remaining three classes are termed explicit, semi-implicit, and implicit, in analogy with the classical notions, and correspond to conditions on the vector field under which the filter update produces a local maximum a posteriori estimate. The maximum a posteriori estimate corresponds to an optimal interpolant in the reproducing kernel Hilbert space associated with the prior, which in the present case is equivalent to a Sobolev space of smoothness ν+1. Consequently, using methods from scattered-data approximation and nonlinear analysis in Sobolev spaces, it is shown that the maximum a posteriori estimate converges to the true solution at a polynomial rate in the fill distance (maximum step size), subject to mild conditions on the vector field. This methodology provides a novel and more natural approach to studying the convergence of these estimators than classical methods of convergence analysis. The methods and theoretical results are demonstrated in numerical examples.
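To make the filtering formulation concrete, the following is a minimal sketch (not the paper's full iterated extended Kalman smoother) of a Gaussian-filter ODE solver: the prior is a once-integrated Wiener process (ν = 1) with state (y, y'), and each step performs an extended-Kalman-filter update against the zero-valued residual observation y' − f(y), linearized with the Jacobian of f. The function names and the logistic test problem are illustrative choices, not from the paper.

```python
import numpy as np

def probabilistic_ode_filter(f, df, y0, t_span, h):
    """Gaussian-filter sketch for y' = f(y) with an integrated Wiener process prior."""
    # Transition model of the once-integrated Wiener process over a step of size h
    A = np.array([[1.0, h], [0.0, 1.0]])            # state transition
    Q = np.array([[h**3 / 3, h**2 / 2],
                  [h**2 / 2, h]])                   # process-noise covariance
    m = np.array([y0, f(y0)])                       # initialize derivative via f
    P = np.zeros((2, 2))
    ts = np.arange(t_span[0], t_span[1] + h / 2, h)
    means = [m[0]]
    for _ in ts[1:]:
        # Prediction step of the Gauss-Markov prior
        m = A @ m
        P = A @ P @ A.T + Q
        # Update: observe the residual y' - f(y) as a zero-valued measurement,
        # linearized with H = d/dm (m[1] - f(m[0]))
        z = m[1] - f(m[0])
        H = np.array([-df(m[0]), 1.0])
        S = H @ P @ H                                # innovation variance (scalar)
        K = P @ H / S                                # Kalman gain
        m = m - K * z
        P = P - np.outer(K, H @ P)
        means.append(m[0])
    return ts, np.array(means)

# Logistic equation y' = y(1 - y), y(0) = 0.1, with known exact solution
f = lambda y: y * (1.0 - y)
df = lambda y: 1.0 - 2.0 * y
ts, ys = probabilistic_ode_filter(f, df, 0.1, (0.0, 2.0), 0.01)
```

As the step size h (the fill distance in the abstract's terminology) shrinks, the filter mean approaches the true solution; the paper's contribution is to quantify this as a polynomial rate for the MAP estimate in the associated Sobolev space.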

