On Maximum-a-Posteriori estimation with Plug & Play priors and stochastic gradient descent

01/16/2022
by Rémi Laumont et al.

Bayesian methods for solving imaging inverse problems usually combine an explicit data likelihood function with a prior distribution that models expected properties of the solution. Many kinds of priors have been explored in the literature, from simple ones expressing local properties to more involved ones exploiting image redundancy at a non-local scale. In a departure from explicit modelling, several recent works have proposed and studied the use of implicit priors defined by an image denoising algorithm. This approach, commonly known as Plug & Play (PnP) regularisation, can deliver remarkably accurate results, particularly when combined with state-of-the-art denoisers based on convolutional neural networks. However, the theoretical analysis of PnP Bayesian models and algorithms is difficult, and works on the topic often rely on unrealistic assumptions about the properties of the image denoiser. This paper studies maximum-a-posteriori (MAP) estimation for Bayesian models with PnP priors. We first consider questions related to existence, stability and well-posedness, and then present a convergence proof for MAP computation by PnP stochastic gradient descent (PnP-SGD) under realistic assumptions on the denoiser used. We report a range of imaging experiments demonstrating PnP-SGD, as well as comparisons with other PnP schemes.
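
As a rough illustration of the kind of scheme the abstract describes, the sketch below shows one plausible form of a PnP-SGD iteration: the likelihood gradient is subsampled to make it stochastic, and the prior gradient is replaced by the denoiser residual via Tweedie's identity, grad log p_eps(x) ≈ (D_eps(x) − x)/eps. This is a minimal sketch under assumed interfaces; the names blur, blur_adjoint, denoiser, the pixelwise subsampling, and the step-size and smoothing parameters are illustrative placeholders, not the authors' implementation.

    import numpy as np

    def pnp_sgd(y, blur, blur_adjoint, denoiser, noise_var, eps,
                step=1e-3, n_iters=500, batch_frac=0.5, rng=None):
        """Sketch of approximate MAP estimation with a Plug & Play prior.

        The implicit prior gradient uses Tweedie's identity,
            grad log p_eps(x) ~= (denoiser(x) - x) / eps,
        and a Gaussian likelihood gradient is subsampled to make it
        stochastic. All interfaces here are illustrative assumptions.
        """
        rng = np.random.default_rng() if rng is None else rng
        x = blur_adjoint(y)  # crude initialisation from the observation
        for _ in range(n_iters):
            # Stochastic likelihood gradient: random subset of measurements,
            # rescaled by 1/batch_frac so the estimator stays unbiased.
            mask = rng.random(y.shape) < batch_frac
            residual = np.where(mask, blur(x) - y, 0.0) / batch_frac
            grad_lik = blur_adjoint(residual) / noise_var
            # Denoiser residual stands in for the (negative) prior gradient.
            grad_prior = (x - denoiser(x)) / eps
            x = x - step * (grad_lik + grad_prior)
        return x

Whether such an iteration converges depends on the step size and on regularity properties of the denoiser, which is precisely what the paper's analysis addresses under realistic assumptions.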


research
03/08/2021

Bayesian imaging using Plug & Play priors: when Langevin meets Tweedie

Since the seminal work of Venkatakrishnan et al. (2013), Plug & Play (...
research
06/20/2020

A Fast Stochastic Plug-and-Play ADMM for Imaging Inverse Problems

In this work we propose an efficient stochastic plug-and-play (PnP) algo...
research
07/07/2017

Fast Stochastic Hierarchical Bayesian MAP for Tomographic Imaging

Any image recovery algorithm attempts to achieve the highest quality rec...
research
04/30/2020

On the Discrepancy Principle for Stochastic Gradient Descent

Stochastic gradient descent (SGD) is a promising numerical method for so...
research
03/16/2023

Stochastic gradient descent for linear inverse problems in variable exponent Lebesgue spaces

We consider a stochastic gradient descent (SGD) algorithm for solving li...
research
07/01/2022

Maximum a posteriori estimators in ℓ^p are well-defined for diagonal Gaussian priors

We prove that maximum a posteriori estimators are well-defined for diago...
research
05/01/2019

LS-SVR as a Bayesian RBF network

We show the theoretical equivalence between the Least Squares Support Ve...
