Sparse Bayesian Lasso via a Variable-Coefficient ℓ_1 Penalty

11/09/2022
by Nathan Wycoff, et al.

Modern statistical learning algorithms are capable of amazing flexibility, but struggle with interpretability. One possible solution is sparsity: performing inference such that many of the parameters are estimated to be identically 0, which may be imposed through nonsmooth penalties such as the ℓ_1 penalty. However, the ℓ_1 penalty introduces significant bias when high sparsity is desired. In this article, we retain the ℓ_1 penalty but define learnable penalty weights λ_p endowed with hyperpriors. We begin by investigating the optimization problem this poses, developing a proximal operator for the ℓ_1 penalty with learnable weights. We then study the theoretical properties of this variable-coefficient ℓ_1 penalty in the context of penalized likelihood. Next, we investigate the application of this penalty to Variational Bayes, developing a model we call the Sparse Bayesian Lasso, which allows behavior qualitatively like that of Lasso regression to be applied to arbitrary variational models. In simulation studies, this gives us the uncertainty quantification and low-bias properties of simulation-based approaches at an order of magnitude less computation. Finally, we apply our methodology to a Bayesian lagged spatiotemporal regression model of internal displacement during the Iraqi Civil War of 2013–2017.
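For context, the fixed-weight case that this work generalizes admits a closed-form proximal operator: coordinatewise soft-thresholding. Below is a minimal NumPy sketch of that standard building block, assuming per-coefficient weights λ_p held fixed; the proximal operator for learnable weights developed in the paper is not reproduced here.

```python
# Standard proximal operator of a fixed-weight l1 penalty:
#   prox(x) = argmin_z 0.5 * ||z - x||^2 + step * sum_p lam_p * |z_p|,
# solved coordinatewise by soft-thresholding. This is the fixed-lambda
# building block; the paper's operator additionally updates learnable
# weights lam_p endowed with hyperpriors.
import numpy as np

def prox_weighted_l1(x, lam, step=1.0):
    """Coordinatewise soft-thresholding.

    x    : array of coefficients before the proximal step
    lam  : scalar or per-coefficient penalty weights lam_p (fixed here)
    step : proximal step size
    """
    return np.sign(x) * np.maximum(np.abs(x) - step * np.asarray(lam), 0.0)

# Example: coefficients whose magnitude falls below their weight are
# set exactly to zero, which is the source of the sparsity (and the bias).
x = np.array([2.0, -0.3, 0.8])
lam = np.array([0.5, 0.5, 1.0])
print(prox_weighted_l1(x, lam))  # ~ [1.5, -0.0, 0.0]
```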
