Bayesian imaging using Plug & Play priors: when Langevin meets Tweedie

03/08/2021
by Rémi Laumont, et al.

Since the seminal work of Venkatakrishnan et al. (2013), Plug & Play (PnP) methods have become ubiquitous in Bayesian imaging. These methods derive Minimum Mean Square Error (MMSE) or Maximum A Posteriori (MAP) estimators for inverse problems in imaging by combining an explicit likelihood function with a prior that is implicitly defined by an image denoising algorithm. The PnP algorithms proposed in the literature mainly differ in the iterative schemes they use for optimisation or for sampling. In the case of optimisation schemes, some recent works guarantee the convergence to a fixed point, albeit not necessarily a MAP estimate. In the case of sampling schemes, to the best of our knowledge, there is no known proof of convergence. There also remain important open questions regarding whether the underlying Bayesian models and estimators are well defined, well-posed, and have the basic regularity properties required to support these numerical schemes. To address these limitations, this paper develops theory, methods, and provably convergent algorithms for performing Bayesian inference with PnP priors. We introduce two algorithms: 1) PnP-ULA (Unadjusted Langevin Algorithm) for Monte Carlo sampling and MMSE inference; and 2) PnP-SGD (Stochastic Gradient Descent) for MAP inference. Using recent results on the quantitative convergence of Markov chains, we establish detailed convergence guarantees for these two algorithms under realistic assumptions on the denoising operators used, with special attention to denoisers based on deep neural networks. We also show that these algorithms approximately target a decision-theoretically optimal Bayesian model that is well-posed. The proposed algorithms are demonstrated on several canonical problems such as image deblurring, inpainting, and denoising, where they are used for point estimation as well as for uncertainty visualisation and quantification.
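To make the PnP-ULA idea concrete, here is a minimal NumPy sketch (not the paper's implementation). It relies on Tweedie's identity, which relates an MMSE denoiser D_eps at noise level eps to the score of the smoothed prior, grad log p_eps(x) ≈ (D_eps(x) − x)/eps, and plugs that score into an unadjusted Langevin step. The function names `pnp_ula`, `grad_log_likelihood`, and `denoiser` are illustrative, and the regularisation weight `alpha` and projection steps used in the paper are simplified away.

```python
import numpy as np

def pnp_ula(y, grad_log_likelihood, denoiser, n_iter=1000, step=1e-4,
            alpha=1.0, eps=1e-2, rng=None):
    """Unadjusted Langevin sampler with a Plug-and-Play prior.

    Tweedie's identity approximates the prior score from a denoiser:
        grad log p_eps(x) ~= (denoiser(x) - x) / eps
    Each iterate takes a gradient-ascent step on the log-posterior
    plus Gaussian noise scaled by sqrt(2 * step).
    Returns the running chain average, i.e. an MMSE estimate.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = y.copy()
    mean = np.zeros_like(x)
    for k in range(n_iter):
        score_prior = alpha * (denoiser(x) - x) / eps   # Tweedie approximation
        drift = grad_log_likelihood(x) + score_prior     # log-posterior gradient
        x = x + step * drift + np.sqrt(2 * step) * rng.standard_normal(x.shape)
        mean += (x - mean) / (k + 1)                     # running posterior mean
    return mean
```

As a sanity check, with a standard Gaussian prior the exact MMSE denoiser is linear, D_eps(x) = x/(1 + eps), and for a unit-variance Gaussian likelihood with observation y = 2 the chain average converges near the analytic posterior mean of 1.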


