On Bayesian Estimation And Proximity Operators

07/11/2018
by Rémi Gribonval, et al.

There are two major routes to address the ubiquitous family of inverse problems appearing in signal and image processing, such as denoising or deblurring. The first route relies on Bayesian modeling, where prior probabilities are used to embody models of both the distribution of the unknown variables and their statistical dependence with the observed data. The estimation process typically relies on the minimization of an expected loss (e.g., minimum mean squared error, or MMSE). The second route has received much attention in the context of sparse regularization and compressive sensing: it consists in designing (often convex) optimization problems involving the sum of a data-fidelity term and a penalty term promoting certain types of unknowns (e.g., sparsity, promoted through an ℓ1 norm).

Well-known relations between these two approaches have led to some widespread misconceptions. In particular, while the so-called Maximum A Posteriori (MAP) estimate with a Gaussian noise model does lead to an optimization problem with a quadratic data-fidelity term, we disprove through explicit examples the common belief that the converse would be true. It has already been shown [7, 9] that for denoising in the presence of additive Gaussian noise, for any prior probability on the unknowns, MMSE estimation can be expressed as a penalized least squares problem, with the apparent characteristics of a MAP estimation problem with Gaussian noise and a (generally) different prior on the unknowns. In other words, the variational approach is rich enough to build all possible MMSE estimators associated with additive Gaussian noise via a well-chosen penalty.

We generalize these results beyond Gaussian denoising and characterize the noise models for which the same phenomenon occurs. In particular, we prove that with (a variant of) Poisson noise and any prior probability on the unknowns, MMSE estimation can again be expressed as the solution of a penalized least squares optimization problem. For additive scalar denoising, the phenomenon holds if and only if the noise distribution is log-concave. In particular, Laplacian denoising can (perhaps surprisingly) be expressed as the solution of a penalized least squares problem. In the multivariate case, the same phenomenon occurs when the noise model belongs to a particular subset of the exponential family. For multivariate additive denoising, the phenomenon holds if and only if the noise is white and Gaussian.
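To make the contrast concrete, the two estimators discussed above can be written as follows (a sketch in standard notation not fixed by the abstract: y denotes the noisy observation, p_X the prior on the unknown x, and sigma^2 the Gaussian noise variance):

% MAP under additive Gaussian noise: a penalized least squares problem whose
% penalty is the negative log-prior, scaled by the noise variance.
\hat{x}_{\mathrm{MAP}}(y)
  = \arg\max_{x} \; p_X(x)\, e^{-\|y-x\|^2/(2\sigma^2)}
  = \arg\min_{x} \; \tfrac{1}{2}\|y-x\|^2 \;-\; \sigma^2 \log p_X(x)

% MMSE: the posterior mean, which minimizes the expected squared error.
\hat{x}_{\mathrm{MMSE}}(y) = \mathbb{E}\left[ X \mid Y = y \right]

% The result of [7, 9] recalled above: for Gaussian denoising the MMSE estimator
% is itself a proximity operator, i.e. the solution of a penalized least squares
% problem, but with a penalty \varphi that generally differs from -\sigma^2 \log p_X.
\hat{x}_{\mathrm{MMSE}}(y) = \arg\min_{x} \; \tfrac{1}{2}\|y-x\|^2 \;+\; \varphi(x)
  \quad \text{for some penalty } \varphi

The contribution summarized above is to determine for which other noise models such a penalized least squares (proximity operator) representation of the MMSE estimator remains possible.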

