Error bounds for some approximate posterior measures in Bayesian inference

11/13/2019
by Han Cheng Lie, et al.

In certain applications involving the solution of a Bayesian inverse problem, it may not be possible or desirable to evaluate the full posterior, e.g. due to high computational cost. This motivates the use of approximate posteriors obtained by approximating the negative log-likelihood or the forward model. We review some error bounds for the approximate posteriors that arise when the negative log-likelihood or forward model is replaced by a deterministic or a random approximation.
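
To fix ideas, here is a minimal sketch of the kind of bound being surveyed, written in the standard Bayesian inverse problem setup; the norm, the constant C, and the moment assumptions below are illustrative placeholders, not the paper's exact statements.

% Assumed standard setup: prior \mu_0, negative log-likelihood \Phi,
% and exact posterior \mu defined by
\[
  \frac{d\mu}{d\mu_0}(u) = \frac{\exp(-\Phi(u))}{Z},
  \qquad Z = \int \exp(-\Phi(u)) \, \mu_0(du).
\]
% Replacing \Phi by an approximation \Phi^{(N)} (e.g. from a surrogate
% forward model) yields an approximate posterior \mu^{(N)}. A typical
% bound controls the Hellinger distance by the log-likelihood error:
\[
  d_{\mathrm{H}}\bigl(\mu, \mu^{(N)}\bigr)
  \le C \, \bigl\lVert \Phi - \Phi^{(N)} \bigr\rVert_{L^2_{\mu_0}},
\]
% where C depends on integrability (e.g. exponential-moment) assumptions
% on \Phi and \Phi^{(N)}; when \Phi^{(N)} is random, the bound is instead
% on an expected or mean-square Hellinger distance over realisations of
% the approximation.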

Related research

12/15/2017
Random forward models and log-likelihoods in Bayesian inverse problems
We consider the use of randomised forward models and log-likelihoods wit...

12/01/2022
Are you using test log-likelihood correctly?
Test log-likelihood is commonly used to compare different models of the ...

06/17/2019
On the Locally Lipschitz Robustness of Bayesian Inverse Problems
In this note we consider the robustness of posterior measures occurring i...

11/17/2020
Generalized Posteriors in Approximate Bayesian Computation
Complex simulators have become a ubiquitous tool in many scientific disc...

11/20/2020
Gradient Regularisation as Approximate Variational Inference
Variational inference in Bayesian neural networks is usually performed u...

10/13/2017
Automated Scalable Bayesian Inference via Hilbert Coresets
The automation of posterior inference in Bayesian data analysis has enab...

06/20/2018
Random Feature Stein Discrepancies
Computable Stein discrepancies have been deployed for a variety of appli...
