Posterior Impropriety of some Sparse Bayesian Learning Models

08/01/2020
by   Anand Dixit, et al.

Sparse Bayesian learning models are typically used for prediction in datasets with a significantly greater number of covariates than observations. Among sparse Bayesian learning models, the relevance vector machine (RVM) is especially popular, as demonstrated by the large number of citations of the original RVM paper of Tipping (2001) [JMLR, 1, 211-244]. In this article we show that the RVM, and some other sparse Bayesian learning models with hyperparameter values currently used in the literature, are based on improper posteriors. Further, we provide necessary and sufficient conditions for posterior propriety of the RVM.
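For context, the RVM referenced above is fit by type-II maximum likelihood: each weight gets its own precision hyperparameter, and hyperparameters of irrelevant weights are driven to infinity, pruning them. The sketch below implements the standard re-estimation updates from Tipping (2001) in plain NumPy on a toy p > n regression problem; the noise precision `beta`, the iteration count, and the pruning threshold are illustrative assumptions, and this sketch does not address the posterior-propriety issue the paper raises.

```python
import numpy as np

def rvm_regression(Phi, t, beta=100.0, n_iter=50, alpha_max=1e6):
    """Type-II maximum-likelihood updates for RVM regression
    (a sketch of Tipping's 2001 re-estimation scheme; the noise
    precision `beta` is assumed known and held fixed here)."""
    n, p = Phi.shape
    alpha = np.ones(p)  # per-weight precision hyperparameters
    for _ in range(n_iter):
        # Gaussian posterior over weights given current hyperparameters
        Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)
        mu = beta * Sigma @ Phi.T @ t
        # gamma_i = 1 - alpha_i * Sigma_ii: how well-determined weight i is
        gamma = 1.0 - alpha * np.diag(Sigma)
        # Re-estimate; irrelevant weights get alpha -> alpha_max (pruned)
        alpha = np.clip(gamma / (mu ** 2 + 1e-12), 1e-12, alpha_max)
    return mu, alpha

# Toy p > n problem: only the first two covariates matter
rng = np.random.default_rng(0)
n, p = 20, 50
Phi = rng.standard_normal((n, p))
w_true = np.zeros(p)
w_true[:2] = [2.0, -3.0]
t = Phi @ w_true + 0.1 * rng.standard_normal(n)

mu, alpha = rvm_regression(Phi, t)
relevant = np.where(alpha < 1e3)[0]  # small alpha => weight retained
```

In a clean run the two true covariates are retained with weight estimates near (2, -3), while the precisions of the remaining 48 weights grow large and those weights are effectively pruned.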


