Random weighting to approximate posterior inference in LASSO regression

02/07/2020
by Tun Lee Ng, et al.

We consider a general-purpose approximation approach to Bayesian inference in which repeated optimization of a randomized objective function provides surrogate samples from the joint posterior distribution. In the context of LASSO regression, we repeatedly assign independently drawn standard-exponential random weights to terms in the objective function and optimize to obtain the surrogate samples. We establish the asymptotic properties of this method under different regularization parameters λ_n. In particular, if λ_n = o(√(n)), then the random-weighting (weighted bootstrap) samples are equivalent (up to the first order) to the Bayesian posterior samples. If λ_n = O( n^c ) for some 1/2 < c < 1, then these samples achieve conditional model selection consistency. We also establish the asymptotic properties of the random-weighting method when the weights are drawn from other distributions, and when weights are also assigned to the LASSO penalty terms.
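The core procedure can be sketched in a few lines: draw i.i.d. Exp(1) weights for the n loss terms, minimize the weighted LASSO objective, and repeat to collect surrogate posterior draws. The sketch below is an illustrative implementation, not the authors' code; the toy data, the choice λ_n = √n / log n (one example of λ_n = o(√n)), and the row-rescaling trick for weighting the squared-error terms are all assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Toy data (illustrative): n = 200 observations, p = 10 predictors,
# only the first three true coefficients are nonzero.
n, p = 200, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + rng.standard_normal(n)

def random_weighting_samples(X, y, lam, n_samples=200, rng=rng):
    """Surrogate posterior draws via random weighting: each draw
    re-optimizes a LASSO objective whose loss terms carry i.i.d.
    standard-exponential weights (the penalty is left unweighted)."""
    n, p = X.shape
    draws = np.empty((n_samples, p))
    for b in range(n_samples):
        w = rng.exponential(1.0, size=n)      # Exp(1) random weights
        sw = np.sqrt(w)
        # Weighting each squared-error term by w_i is equivalent to
        # rescaling the corresponding row of (X, y) by sqrt(w_i).
        # sklearn's Lasso minimizes (1/2n)||y - Xb||^2 + alpha*||b||_1,
        # so alpha = lam / (2n) matches sum-of-squares + lam*||b||_1.
        model = Lasso(alpha=lam / (2 * n), fit_intercept=False)
        model.fit(X * sw[:, None], y * sw)
        draws[b] = model.coef_
    return draws

# lambda_n = o(sqrt(n)): the regime where the draws are first-order
# equivalent to Bayesian posterior samples.
draws = random_weighting_samples(X, y, lam=np.sqrt(n) / np.log(n))
```

The collection `draws` can then be summarized like ordinary posterior samples, e.g. columnwise means and quantiles give point estimates and credible-style intervals for each coefficient.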


