Bayesian Precision Factor Analysis for High-dimensional Sparse Gaussian Graphical Models
Gaussian graphical models are popular tools for studying the dependence relationships among random variables. We propose a novel approach to Gaussian graphical models that decomposes the precision matrix encoding the conditional independence relationships into a low-rank and a diagonal component. Such decompositions are already popular for modeling large covariance matrices, where they admit a latent-factor-based representation that allows easy inference, but they have yet to see widespread use for precision matrices because of the associated computational intractability. We show that a simple latent variable representation for such a decomposition in fact exists for precision matrices as well. The latent variable construction provides fundamentally novel insights into Gaussian graphical models. It is also immediately useful in Bayesian settings, enabling efficient posterior inference via a straightforward Gibbs sampler that scales to high-dimensional problems far beyond the limits of the current state of the art. The ability to efficiently explore the full posterior allows model uncertainty to be easily assessed and the underlying graph to be determined via a novel posterior false discovery rate control procedure. Crucially, the decomposition also allows us to adapt sparsity-inducing priors that shrink insignificant off-diagonal entries toward zero, making the approach well suited to high-dimensional, small-sample-size sparse settings. We evaluate the method's empirical performance in synthetic experiments and illustrate its practical utility on data sets from two different application domains.
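The sketch below is not the paper's latent variable construction or Gibbs sampler; it is only a minimal numpy illustration of the low-rank-plus-diagonal precision parameterization the abstract describes, Omega = Lambda Lambda' + diag(delta), and of how the nonzero off-diagonal entries of Omega define the conditional independence graph. The dimensions, sparsity level, and the Woodbury-based covariance computation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, n = 50, 3, 200          # variables, latent rank, sample size (illustrative)

# Sparse p x q loading matrix and positive diagonal component
Lambda = rng.normal(size=(p, q)) * (rng.uniform(size=(p, q)) < 0.2)
delta = rng.uniform(0.5, 1.5, size=p)

# Precision matrix: low rank plus diagonal
Omega = Lambda @ Lambda.T + np.diag(delta)

# Conditional independence graph: nonzero off-diagonal entries of Omega
graph = (np.abs(Omega) > 1e-10) & ~np.eye(p, dtype=bool)

# Covariance via the Woodbury identity, avoiding a direct p x p inversion:
# (D + L L')^{-1} = D^{-1} - D^{-1} L (I_q + L' D^{-1} L)^{-1} L' D^{-1}
Dinv = np.diag(1.0 / delta)
core = np.linalg.inv(np.eye(q) + Lambda.T @ Dinv @ Lambda)
Sigma = Dinv - Dinv @ Lambda @ core @ Lambda.T @ Dinv

# Draw synthetic data and compare a (ridge-regularized) sample precision to Omega
y = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
Omega_hat = np.linalg.inv(np.cov(y, rowvar=False) + 1e-3 * np.eye(p))
print("edges in true graph:", graph.sum() // 2)
print("max off-diagonal |Omega - Omega_hat|:", np.abs(Omega - Omega_hat)[graph].max())
```

Because the rank q is much smaller than p, all matrix inversions in such a parameterization involve only q x q systems, which is the kind of structure that makes posterior computation scale to high dimensions.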