Graphical Lasso and Thresholding: Equivalence and Closed-form Solutions

08/30/2017
by   Salar Fattahi, et al.

Graphical Lasso (GL) is a popular method for learning the structure of an undirected graphical model, based on an ℓ_1 regularization technique. The first goal of this work is to study the behavior of the optimal solution of GL as a function of its regularization coefficient. We show that if the number of samples is not too small compared to the number of parameters, the sparsity pattern of the optimal solution of GL changes gradually as the regularization coefficient increases from 0 to infinity.

The second objective of this paper is to compare the computationally heavy GL technique with a numerically cheap heuristic for learning graphical models that simply thresholds the sample correlation matrix. To this end, two notions of sign-consistent and inverse-consistent matrices are developed, and it is then shown that the thresholding and GL methods are equivalent if: (i) the thresholded sample correlation matrix is both sign-consistent and inverse-consistent, and (ii) the gap between the smallest thresholded and the largest un-thresholded entries of the sample correlation matrix is not too small.

Building upon this result, it is proved that the GL method, viewed as a conic optimization problem, has an explicit closed-form solution whenever the thresholded sample correlation matrix has an acyclic structure. This result is then generalized to arbitrary sparse support graphs, where a formula is found that yields an approximate solution of GL. The closed-form solution approximately satisfies the KKT conditions for the GL problem and, more importantly, the approximation error decreases exponentially fast with respect to the length of the minimum-length cycle of the sparsity graph. The developed results are demonstrated on synthetic data, electrical circuits, functional MRI data, and traffic flows for transportation networks.
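The thresholding heuristic compared against GL in the abstract can be sketched in a few lines of NumPy: estimate the sample correlation matrix, then keep only the entries whose magnitude exceeds a threshold. The snippet below is a minimal illustration of this idea (the threshold value `tau` and the toy chain model are illustrative choices, not taken from the paper).

```python
import numpy as np

def threshold_correlation(samples, tau):
    """Estimate a sparsity pattern by hard-thresholding the sample
    correlation matrix: keep edge (i, j) iff |corr(i, j)| > tau.
    Returns a boolean adjacency matrix with an empty diagonal."""
    corr = np.corrcoef(samples, rowvar=False)
    support = np.abs(corr) > tau
    np.fill_diagonal(support, False)
    return support

# Toy Gaussian chain x0 -> x1 -> x2: the pair (x0, x2) is only
# indirectly correlated (corr ~ 0.64), while the chain edges have
# corr ~ 0.8, so a threshold of 0.7 recovers the chain structure.
rng = np.random.default_rng(0)
n = 20000
x0 = rng.standard_normal(n)
x1 = 0.8 * x0 + 0.6 * rng.standard_normal(n)
x2 = 0.8 * x1 + 0.6 * rng.standard_normal(n)
X = np.column_stack([x0, x1, x2])

support = threshold_correlation(X, tau=0.7)
```

Note that the resulting support graph here is acyclic (a chain), which is exactly the regime where the paper shows GL admits an explicit closed-form solution.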


