Adaptive Dimension Reduction to Accelerate Infinite-Dimensional Geometric Markov Chain Monte Carlo

07/15/2018
by Shiwei Lan, et al.

Bayesian inverse problems rely heavily on efficient and effective inference methods for uncertainty quantification (UQ). Infinite-dimensional MCMC algorithms, defined directly on function spaces, are robust under refinement of the physical model. Recent developments in this class of algorithms incorporate the geometry of the posterior informed by the data, enabling them to explore complex probability structures; however, the required geometric quantities are usually expensive to obtain in high dimensions. On the other hand, most of the geometric information about the unknown parameter space in this setting is concentrated in an intrinsic finite-dimensional subspace. To mitigate the computational cost and scale up applications of infinite-dimensional geometric MCMC (∞-GMC), we apply geometry-informed algorithms on the intrinsic subspace to probe its complex structure, and simpler methods such as preconditioned Crank-Nicolson (pCN) on its complementary subspace, where the geometry is flat. In this work, we take advantage of dimension-reduction techniques to accelerate the original ∞-GMC algorithms. More specifically, a partial spectral decomposition of the (prior or posterior) covariance operator is used to identify a number of principal eigen-directions as a basis for the intrinsic subspace. The combination of dimension-independent algorithms, geometric information, and dimension reduction yields a more efficient implementation: (adaptive) dimension-reduced infinite-dimensional geometric MCMC. With a small amount of computational overhead, we achieve a speed-up of over 70 times relative to pCN on a simulated elliptic inverse problem and on an inverse problem arising in turbulent combustion. A number of error bounds comparing the various MCMC proposals are presented to predict the asymptotic behavior of the proposed dimension-reduced algorithms.
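To make the subspace construction concrete, here is a minimal Python sketch (not the authors' code) of identifying the intrinsic subspace via partial spectral decomposition, assuming the covariance operator has been discretized to a symmetric positive semi-definite matrix `C`; the names `C`, `r`, and `u` are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def split_parameter_space(C, r):
    """Return the top-r eigenpairs of the discretized covariance C.

    The eigenvectors span the finite-dimensional 'intrinsic' subspace;
    its orthogonal complement is treated as geometry-flat.
    """
    # eigsh computes r eigenpairs of largest magnitude ('LM') for a
    # symmetric operator, avoiding a full (expensive) decomposition.
    eigvals, eigvecs = eigsh(C, k=r, which='LM')
    # Sort eigenpairs in decreasing order of eigenvalue.
    order = np.argsort(eigvals)[::-1]
    return eigvals[order], eigvecs[:, order]

def project(u, eigvecs):
    """Split a parameter vector u into intrinsic coordinates and the
    residual lying in the complementary subspace."""
    u_r = eigvecs.T @ u            # coordinates in the intrinsic subspace
    u_perp = u - eigvecs @ u_r     # component in the complement
    return u_r, u_perp
```

In the adaptive variant described in the abstract, the truncation rank `r` would be chosen from the observed eigenvalue decay rather than fixed in advance; the sketch keeps it fixed for clarity.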

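The hybrid proposal suggested by the abstract can then be sketched as below: a geometry-informed move on the intrinsic coordinates (here a user-supplied callable `geometric_step`, a stand-in for an ∞-MALA/∞-HMC-type update) combined with a plain pCN move on the complement. All names, and the crude whitened approximation of the prior draw restricted to the complement, are assumptions for illustration only.

```python
import numpy as np

def hybrid_proposal(u, eigvals, eigvecs, geometric_step, beta=0.1, rng=None):
    """One proposal of the dimension-reduced scheme (sketch).

    u              : current parameter vector (discretized function)
    eigvals        : top-r covariance eigenvalues
    eigvecs        : corresponding eigenvectors (columns)
    geometric_step : callable updating the intrinsic coordinates
    beta           : pCN step-size parameter in (0, 1]
    """
    rng = np.random.default_rng() if rng is None else rng
    u_r = eigvecs.T @ u
    u_perp = u - eigvecs @ u_r

    # Geometry-informed update on the r intrinsic coordinates.
    u_r_new = geometric_step(u_r, eigvals)

    # pCN update on the complement: u' = sqrt(1 - beta^2) u + beta * xi.
    # The prior draw restricted to the complement is approximated here by
    # a whitened sample projected off the intrinsic subspace (assumption).
    xi = rng.standard_normal(u.shape)
    xi_perp = xi - eigvecs @ (eigvecs.T @ xi)
    u_perp_new = np.sqrt(1 - beta**2) * u_perp + beta * xi_perp

    return eigvecs @ u_r_new + u_perp_new
```

Because pCN is dimension-independent, the cost of the complement update stays flat under mesh refinement, while the expensive geometric computations are confined to the r intrinsic coordinates.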

