Sparse Bayesian Unsupervised Learning

01/30/2014
by Stéphane Gaïffas et al.

This paper addresses variable selection, clustering, and estimation in an unsupervised high-dimensional setting. Our approach fits constrained Gaussian mixture models, learning the number of clusters K and the set of relevant variables S through a generalized Bayesian posterior with a sparsity-inducing prior. We prove a sparsity oracle inequality showing that this procedure selects the optimal parameters K and S. The procedure is implemented with a Metropolis-Hastings algorithm built on a clustering-oriented greedy proposal, which makes convergence to the posterior very fast.
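To make the sampling step concrete, here is a minimal sketch of a generic random-walk Metropolis-Hastings sampler on a toy one-dimensional target. This is an illustration of the accept/reject mechanics only; it is not the paper's clustering-oriented greedy proposal over (K, S), and the target density and all function names are assumptions for the example.

```python
import math
import random

def metropolis_hastings(log_post, x0, n_steps=5000, step=0.5, seed=0):
    """Generic random-walk Metropolis-Hastings sampler (illustrative only).

    log_post: unnormalized log-posterior of the target.
    Returns the chain of visited states, including the start.
    """
    rng = random.Random(seed)
    x = x0
    chain = [x]
    for _ in range(n_steps):
        # Symmetric Gaussian proposal, so the Hastings correction
        # cancels and the acceptance ratio is the posterior ratio.
        y = x + rng.gauss(0.0, step)
        # Accept with probability min(1, post(y)/post(x)); the tiny
        # offset guards against log(0) when rng.random() returns 0.
        if math.log(rng.random() + 1e-300) < log_post(y) - log_post(x):
            x = y
        chain.append(x)
    return chain

# Toy target: standard normal log-density (up to an additive constant).
chain = metropolis_hastings(lambda t: -0.5 * t * t, x0=3.0)
# Discard a burn-in and estimate the posterior mean (should be near 0).
mean = sum(chain[1000:]) / len(chain[1000:])
```

In the paper's setting the state would instead be the discrete pair (K, S) plus mixture parameters, and the proposal would be the greedy, clustering-oriented move the authors describe rather than a Gaussian random walk.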

Related research

- 01/31/2017: Variable selection for clustering with Gaussian mixture models: state of the art. "The mixture models have become widely used in clustering, given its prob..."
- 07/21/2022: Bayesian Sparse Gaussian Mixture Model in High Dimensions. "We establish the minimax risk for parameter estimation in sparse high-di..."
- 05/03/2021: A Metropolized adaptive subspace algorithm for high-dimensional Bayesian variable selection. "A simple and efficient adaptive Markov Chain Monte Carlo (MCMC) method, ..."
- 07/18/2021: Decoupling Shrinkage and Selection for the Bayesian Quantile Regression. "This paper extends the idea of decoupling shrinkage and sparsity for con..."
- 05/14/2021: Posterior Regularisation on Bayesian Hierarchical Mixture Clustering. "We study a recent inferential framework, named posterior regularisation,..."
- 07/19/2023: Entropy regularization in probabilistic clustering. "Bayesian nonparametric mixture models are widely used to cluster observa..."
- 01/07/2014: Key point selection and clustering of swimmer coordination through Sparse Fisher-EM. "To answer the existence of optimal swimmer learning/teaching strategies,..."