Approximation and sampling of multivariate probability distributions in the tensor train decomposition

10/02/2018
by Sergey Dolgov et al.

General multivariate distributions are notoriously expensive to sample from, particularly the high-dimensional posterior distributions arising in PDE-constrained inverse problems. This paper develops a sampler for arbitrary continuous multivariate distributions that is based on low-rank surrogates in the tensor-train (TT) format. We construct a TT approximation to the target probability density function using cross interpolation, which requires only a small number of function evaluations. For sufficiently smooth distributions the storage required for the TT approximation is moderate, scaling linearly with dimension. The structure of the TT surrogate allows efficient sampling by the conditional distribution method. Unbiased estimates may be calculated by correcting the transformed random seeds with a Metropolis–Hastings accept/reject step. Alternatively, one can use a more efficient quasi-Monte Carlo quadrature, corrected either by a control-variate strategy or by importance weighting. We prove that the error in the TT approximation propagates linearly into the Metropolis–Hastings rejection rate and the integrated autocorrelation time of the resulting Markov chain. These methods are demonstrated in three computed examples: fitting failure-time data for shock absorbers; a PDE-constrained inverse diffusion problem; and sampling from the Rosenbrock distribution. The delayed rejection adaptive Metropolis (DRAM) algorithm is used as a benchmark. We find that the importance-weight-corrected quasi-Monte Carlo quadrature performs best in all computed examples, and is orders of magnitude more efficient than DRAM across a wide range of approximation accuracies and sample sizes. Indeed, all the methods developed here significantly outperform DRAM in all computed examples.
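To make the conditional distribution method concrete, the sketch below draws one sample from a density that has been discretised on a tensor-product grid and stored as TT cores: each marginal and conditional needed along the way reduces to cheap core contractions. This is a simplified reading of the abstract, not the authors' implementation; the function name, the grid discretisation, and the rank-1 separable test density are illustrative assumptions, and the Metropolis–Hastings or importance-weighting correction discussed above is omitted.

import numpy as np

def sample_tt_density(cores, rng):
    # Draw one grid-index tuple from a nonnegative density stored in
    # tensor-train format; cores[k] has shape (r_{k-1}, n_k, r_k),
    # with boundary ranks r_0 = r_d = 1.
    d = len(cores)
    # tails[k]: cores k+1..d-1 with their grid modes summed out, i.e.
    # the vector that marginalises over the not-yet-sampled variables.
    tails = [None] * d
    tails[d - 1] = np.ones(1)
    for k in range(d - 2, -1, -1):
        tails[k] = cores[k + 1].sum(axis=1) @ tails[k + 1]

    phi = np.ones(1)            # running left interface vector
    idx = np.empty(d, dtype=int)
    for k in range(d):
        # Unnormalised conditional pmf of coordinate k given the
        # coordinates already sampled.
        p = np.einsum('a,aib,b->i', phi, cores[k], tails[k])
        p = np.maximum(p, 0.0)  # the TT surrogate may dip below zero
        p /= p.sum()
        idx[k] = rng.choice(len(p), p=p)
        phi = phi @ cores[k][:, idx[k], :]
    return idx

# Toy usage: a rank-1 (separable) standard-normal-like density on a grid.
rng = np.random.default_rng(0)
g = np.exp(-0.5 * np.linspace(-3.0, 3.0, 50) ** 2)
cores = [g.reshape(1, -1, 1) for _ in range(4)]
print(sample_tt_density(cores, rng))

In the setting of the paper one would map the sampled indices back to continuous coordinates and then remove the surrogate's bias, either by an accept/reject step on each draw or, roughly, by reweighting a quasi-Monte Carlo point set with w = pi_exact / pi_tt and forming ratio estimates of the type sum(w * f) / sum(w).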


