Fast Algorithms for Learning Latent Variables in Graphical Models

by Mohammadreza Soltani et al.

We study the problem of learning latent variables in Gaussian graphical models. Existing methods for this problem assume that the precision matrix of the observed variables is the superposition of a sparse and a low-rank component. In this paper, we focus on the estimation of the low-rank component, which encodes the effect of marginalization over the latent variables. We introduce fast, proper learning algorithms for this problem. In contrast with existing approaches, our algorithms are manifestly non-convex. We support their efficacy via a rigorous theoretical analysis and show that our algorithms match the best known sample-complexity bounds while achieving computational speed-ups over existing methods. We complement our theory with several numerical experiments.
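To make the flavor of such manifestly non-convex low-rank estimation concrete, here is a generic sketch of singular value projection (SVP): gradient descent on a least-squares loss with a hard projection onto rank-r matrices after each step. This is an illustrative stand-in, not the paper's exact algorithm; the measurement operator `A`, dimensions, and step size below are all assumptions.

```python
import numpy as np

def svp(A, y, n, r, step=1.0, iters=100):
    """Generic singular value projection sketch (assumed setup, not the
    authors' method): recover an n x n rank-r matrix L from linear
    measurements y = A @ vec(L), via non-convex projected gradient descent.

    A : (m, n*n) measurement matrix
    y : (m,) observed measurements
    """
    m = len(y)
    L = np.zeros((n, n))
    for _ in range(iters):
        # gradient of 0.5/m * ||A vec(L) - y||^2
        grad = (A.T @ (A @ L.ravel() - y)).reshape(n, n) / m
        L = L - step * grad
        # hard projection onto the (non-convex) set of rank-r matrices
        U, s, Vt = np.linalg.svd(L, full_matrices=False)
        L = (U[:, :r] * s[:r]) @ Vt[:r]
    return L
```

The rank-r projection step is what makes the iteration non-convex: instead of relaxing the rank constraint to a nuclear-norm penalty (as convex approaches do), each iterate is truncated to rank r directly via an SVD, which is where the computational savings typically come from.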




Improved Algorithms for Matrix Recovery from Rank-One Projections

We consider the problem of estimation of a low-rank matrix from a limite...

Ising Models with Latent Conditional Gaussian Variables

Ising models describe the joint probability distribution of a vector of ...

Low Complexity Gaussian Latent Factor Models and a Blessing of Dimensionality

Learning the structure of graphical models from data is a fundamental pr...

Sequential Local Learning for Latent Graphical Models

Learning parameters of latent graphical models (GM) is inherently much h...

SILVar: Single Index Latent Variable Models

A semi-parametric, non-linear regression model in the presence of latent...

Learning Gaussian Graphical Models with Observed or Latent FVSs

Gaussian Graphical Models (GGMs) or Gauss Markov random fields are widel...

GP-select: Accelerating EM using adaptive subspace preselection

We propose a nonparametric procedure to achieve fast inference in genera...