Rotation Invariant Householder Parameterization for Bayesian PCA

05/12/2019
by Rajbir S. Nirwan, et al.

We consider probabilistic PCA and related factor models from a Bayesian perspective. These models are in general not identifiable, as the likelihood has a rotational symmetry. This gives rise to complicated posterior distributions with continuous subspaces of equal density and thus hinders both the efficiency of inference and the interpretation of the obtained parameters. In particular, posterior averages over factor loadings become meaningless and only model predictions are unambiguous. Here, we propose a parameterization based on Householder transformations, which removes the rotational symmetry of the posterior. Furthermore, by relying on results from random matrix theory, we establish the parameter distribution which leaves the model unchanged compared to the original rotationally symmetric formulation. In particular, we avoid the need to compute the Jacobian determinant of the parameter transformation. This allows us to efficiently implement probabilistic PCA in a rotation invariant fashion in any state-of-the-art toolbox. We implement our model in the probabilistic programming language Stan and illustrate it on several examples.
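The abstract only sketches the construction, so the following is a minimal NumPy illustration (not the authors' Stan implementation) of the underlying idea: a D x q loading matrix with orthonormal columns is built as a product of Householder reflections, which pins down a single representative of every rotation-equivalent loading matrix. The function name householder_orthonormal and the toy dimensions are assumptions made for this sketch.

```python
import numpy as np

def householder_orthonormal(vs):
    """Build a D x q matrix U with orthonormal columns from Householder
    vectors vs, where vs[i] has length D - i and is embedded with its
    first i entries set to zero: U = H_1 H_2 ... H_q I_{D x q}.
    """
    D, q = len(vs[0]), len(vs)
    U = np.eye(D)[:, :q]                      # first q columns of the identity
    for i in reversed(range(q)):              # apply H_q first, H_1 last
        v = np.zeros(D)
        v[i:] = vs[i]
        v /= np.linalg.norm(v)
        U -= 2.0 * np.outer(v, v @ U)         # H_i U with H_i = I - 2 v v^T
    return U

rng = np.random.default_rng(0)
D, q = 5, 2
vs = [rng.standard_normal(D - i) for i in range(q)]
U = householder_orthonormal(vs)
assert np.allclose(U.T @ U, np.eye(q))        # columns are orthonormal

# Probabilistic PCA marginal covariance: x ~ N(mu, W W^T + sigma^2 I)
# with W = U diag(s). Fixing U and ordering s removes the W -> W R ambiguity,
# since W W^T is unchanged by any orthogonal R.
s = np.array([2.0, 1.0])                      # ordered singular values
sigma = 0.5
C = U @ np.diag(s**2) @ U.T + sigma**2 * np.eye(D)
```

As the abstract notes, the authors further derive, via random matrix theory, the prior on the Householder vectors under which U is uniformly distributed, so that no Jacobian correction is needed; that derivation is given in the paper and is not reproduced in this sketch.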


Related research:

- Model selection in the space of Gaussian models invariant by symmetry (04/07/2020): We consider multivariate centred Gaussian models for the random variable...
- Bayesian Variable Selection for Globally Sparse Probabilistic PCA (05/19/2016): Sparse versions of principal component analysis (PCA) have imposed thems...
- Probabilistic symmetry and invariant neural networks (01/18/2019): In an effort to improve the performance of deep neural networks in data-...
- Bayesian inverse problems using homotopy (03/28/2022): In solving Bayesian inverse problems, it is often desirable to use a com...
- Measure Transformer Semantics for Bayesian Machine Learning (08/03/2013): The Bayesian approach to machine learning amounts to computing posterior...
- Symmetry-Aware Autoencoders: s-PCA and s-nlPCA (11/04/2021): Nonlinear principal component analysis (nlPCA) via autoencoders has attr...
- Detecting Parameter Symmetries in Probabilistic Models (12/19/2013): Probabilistic models often have parameters that can be translated, scale...
