Max-and-Smooth: a two-step approach for approximate Bayesian inference in latent Gaussian models

07/27/2019
by Birgir Hrafnkelsson, et al.

With modern high-dimensional data, complex statistical models are necessary, requiring computationally feasible inference schemes. We introduce Max-and-Smooth, an approximate Bayesian inference scheme for a flexible class of latent Gaussian models (LGMs) in which one or more of the likelihood parameters are modeled by latent additive Gaussian processes. Max-and-Smooth consists of two steps. In the first step (Max), the likelihood function is approximated by a Gaussian density whose mean and covariance equal either (a) the maximum likelihood estimate and the inverse observed information, respectively, or (b) the mean and covariance of the normalized likelihood function. In the second step (Smooth), the latent parameters and hyperparameters are inferred and smoothed with the approximated likelihood function. The method ensures that the uncertainty from the first step is correctly propagated to the second. Since the approximated likelihood function is Gaussian, the approximate posterior density of the latent parameters of the LGM (conditional on the hyperparameters) is also Gaussian, facilitating efficient posterior inference in high dimensions. Furthermore, the approximate marginal posterior distribution of the hyperparameters is tractable, so the hyperparameters can be sampled independently of the latent parameters. With a large number of independent data replicates, sparse precision matrices, and high-dimensional latent vectors, the speedup is substantial in comparison to an MCMC scheme that infers the posterior density from the exact likelihood function. The proposed inference scheme is demonstrated on a spatially referenced real dataset and on simulated data mimicking spatial, temporal, and spatio-temporal inference problems. Our results show that Max-and-Smooth is accurate and fast.
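To make the two steps concrete, here is a minimal sketch in Python/NumPy on a toy model: J independent groups of Gaussian replicates whose group-level means follow a first-order random-walk (sparse-precision) prior with a fixed, hypothetical smoothing precision tau. All variable names are illustrative and the setup is an assumption, not the paper's examples; in this conjugate toy case the Gaussian likelihood approximation of the Max step is exact, whereas the paper applies the same recipe to non-Gaussian likelihoods and also infers the hyperparameters, which this sketch does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: J groups with n replicates each, y_jk ~ N(eta_j, sigma2),
# where the latent means eta follow a smooth (random-walk-like) signal.
J, n, sigma2 = 50, 25, 1.0
eta_true = np.cumsum(rng.normal(0.0, 0.3, J))
y = eta_true[:, None] + rng.normal(0.0, np.sqrt(sigma2), (J, n))

# --- Step 1 (Max): Gaussian approximation of each group's likelihood.
# The MLE is the sample mean and the observed information is n / sigma2;
# for this Gaussian toy model the "approximation" is exact.
eta_hat = y.mean(axis=1)              # maximum likelihood estimates
obs_info = np.full(J, n / sigma2)     # observed information (precision of eta_hat)

# --- Step 2 (Smooth): treat eta_hat as Gaussian pseudo-data with known
# precision and combine it with a sparse first-order random-walk prior.
tau = 10.0                            # hypothetical fixed smoothing precision
Q = tau * (2.0 * np.eye(J) - np.eye(J, k=1) - np.eye(J, k=-1))
Q[0, 0] = Q[-1, -1] = tau             # intrinsic RW1 boundary corrections

Q_post = Q + np.diag(obs_info)                          # posterior precision
mu_post = np.linalg.solve(Q_post, obs_info * eta_hat)   # posterior mean

# Because the approximated likelihood is Gaussian, exact posterior samples
# (given tau) come from a single Cholesky factorization of Q_post.
L_chol = np.linalg.cholesky(Q_post)
draw = mu_post + np.linalg.solve(L_chol.T, rng.normal(size=J))
```

The structural point the sketch illustrates is that, after the Max step, the Smooth step reduces to a single Gaussian update with a sparse posterior precision matrix, so the posterior mean and posterior samples of the latent vector are obtained from sparse Cholesky solves rather than from MCMC over the exact likelihood.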
