Bayesian anti-sparse coding

12/18/2015
by Clément Elvira, et al.

Sparse representations have proven their efficiency in solving a wide class of inverse problems encountered in signal and image processing. Conversely, enforcing the information to be spread uniformly over the representation coefficients exhibits relevant properties in various applications such as digital communications. Anti-sparse regularization can be naturally expressed through an ℓ_∞-norm penalty. This paper derives a probabilistic formulation of such representations. A new probability distribution, referred to as the democratic prior, is first introduced. Its main properties, as well as three random variate generators for this distribution, are derived. This distribution is then used as a prior to promote anti-sparsity in a Gaussian linear inverse problem, yielding a fully Bayesian formulation of anti-sparse coding. Two Markov chain Monte Carlo (MCMC) algorithms are proposed to generate samples according to the posterior distribution. The first is a standard Gibbs sampler. The second uses Metropolis-Hastings moves that exploit the proximity mapping of the log-posterior distribution. These samples are used to approximate maximum a posteriori and minimum mean square error estimators of both parameters and hyperparameters. Simulations on synthetic data illustrate the performance of the two proposed samplers, for both complete and over-complete dictionaries. All results are compared to the recent deterministic variational FITRA algorithm.
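To make the setting concrete, the sketch below samples from a posterior combining a Gaussian likelihood y = Hx + n with an ℓ_∞ (anti-sparse) penalty on x, i.e. a log-posterior of the form −‖y − Hx‖²/(2σ²) − λ‖x‖_∞. It uses a plain random-walk Metropolis-Hastings chain as a generic stand-in, not the paper's Gibbs or proximal-MH samplers; the model sizes, λ, σ, and step size are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Gaussian linear model y = H x + n (dimensions are hypothetical).
n_obs, n_coef = 8, 12
H = rng.standard_normal((n_obs, n_coef))
x_true = rng.uniform(-1.0, 1.0, n_coef)
sigma = 0.1
y = H @ x_true + sigma * rng.standard_normal(n_obs)
lam = 5.0  # democratic-prior parameter (assumed value)

def log_post(x):
    # log p(x | y) up to a constant: Gaussian log-likelihood
    # plus the anti-sparse l_inf log-prior term.
    resid = y - H @ x
    return -resid @ resid / (2.0 * sigma**2) - lam * np.max(np.abs(x))

# Random-walk Metropolis-Hastings (generic, not the paper's samplers).
x = np.zeros(n_coef)
lp = log_post(x)
samples = []
for _ in range(20000):
    prop = x + 0.05 * rng.standard_normal(n_coef)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject step
        x, lp = prop, lp_prop
    samples.append(x)

# Discard burn-in, then average to approximate the MMSE estimator.
x_mmse = np.mean(samples[10000:], axis=0)
```

In practice the ℓ_∞ prior pushes the coefficient magnitudes toward a common level, which is the "democratic" spreading behavior the abstract describes; the proximal Metropolis-Hastings scheme of the paper would replace the isotropic random walk with proposals built from the proximity mapping of the log-posterior.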


