A convex pseudo-likelihood framework for high dimensional partial correlation estimation with convergence guarantees

07/20/2013
by Kshitij Khare, et al.

Sparse high dimensional graphical model selection is a topic of much interest in modern statistics. A popular approach is to apply l1-penalties to either (1) parametric likelihoods or (2) regularized regressions/pseudo-likelihoods, with the latter having the distinct advantage of not explicitly assuming Gaussianity. However, none of the popular methods proposed for solving pseudo-likelihood based objective functions have provable convergence guarantees, so it is not clear whether the corresponding estimators exist or are even computable, or whether they actually yield correct partial correlation graphs. This paper proposes a new pseudo-likelihood based graphical model selection method that aims to overcome some of the shortcomings of current methods while retaining their respective strengths. In particular, we introduce a novel framework that leads to a convex formulation of the partial covariance regression graph problem, resulting in an objective function composed of quadratic forms. The objective is then optimized via a coordinate-wise approach. The specific functional form of the objective facilitates rigorous convergence analysis leading to convergence guarantees; an important property that cannot be established using standard results when the dimension is larger than the sample size, as is often the case in high dimensional applications. These convergence guarantees ensure that the estimators are well-defined under very general conditions and are always computable. In addition, the approach yields estimators that have good large sample properties and also respect symmetry. Furthermore, we demonstrate the approach on simulated and real data, including timing comparisons and numerical convergence results. We also present a novel unifying framework that places all graphical pseudo-likelihood methods as special cases of a more general formulation, leading to important insights.
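The abstract's key computational ingredient is coordinate-wise minimization of a convex, l1-penalized objective built from quadratic forms: each one-dimensional subproblem then has a closed-form soft-thresholded solution. As an illustrative sketch only (not the paper's actual estimator or update equations), here is that building block applied to a generic l1-penalized least-squares problem; the function names and the objective scaling are this sketch's own choices:

```python
import numpy as np

def soft_threshold(z, t):
    # Closed-form minimizer of 0.5*(x - z)**2 + t*|x|.
    return np.sign(z) * max(abs(z) - t, 0.0)

def coordinate_descent_lasso(X, y, lam, n_iter=200):
    # Minimize (0.5/n)*||y - X @ beta||^2 + lam*||beta||_1
    # by cycling through coordinates; each one-dimensional
    # subproblem is an l1-penalized quadratic, so its exact
    # minimizer is a soft-thresholded value.
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n  # per-coordinate curvature
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with coordinate j removed.
            r_j = y - X @ beta + X[:, j] * beta[j]
            z = X[:, j] @ r_j / n
            beta[j] = soft_threshold(z, lam) / col_sq[j]
    return beta
```

Because every coordinate update exactly minimizes a convex one-dimensional function, the objective value is monotone nonincreasing along the iterations; the paper's contribution is establishing convergence guarantees of this flavor for its pseudo-likelihood objective even when the dimension exceeds the sample size.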


