An efficient ADMM algorithm for high dimensional precision matrix estimation via penalized quadratic loss

by Cheng Wang et al.

The estimation of high-dimensional precision matrices has been a central topic in statistical learning. However, since the number of parameters scales quadratically with the dimension p, many state-of-the-art methods do not scale well to problems with very large p. In this paper, we propose a highly efficient algorithm for precision matrix estimation via penalized quadratic loss functions. In the high-dimension, low-sample-size setting, the computational complexity of our algorithm is linear in both the sample size and the number of parameters, the same as that of computing the sample covariance matrix.
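To make the penalized quadratic loss concrete, a common form of this estimator minimizes tr(ΩΣ̂Ω)/2 − tr(Ω) + λ‖Ω‖₁ (penalty on off-diagonal entries), and ADMM alternates a closed-form Ω-update with an ℓ₁ soft-thresholding step. The NumPy sketch below is illustrative, not the paper's implementation: the function names and the fixed ADMM penalty ρ are assumptions, and this dense eigendecomposition-based update costs O(p³) per iteration rather than the linear cost the paper achieves in the n ≪ p regime.

```python
import numpy as np

def soft_threshold(A, t):
    """Element-wise soft-thresholding: the proximal operator of t*||.||_1."""
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def admm_precision(S, lam, rho=1.0, n_iter=200):
    """ADMM sketch for min_Omega tr(Omega S Omega)/2 - tr(Omega) + lam*||Omega||_1,off.

    The Omega-update solves (S Omega + Omega S)/2 + rho*Omega = B in closed
    form in the eigenbasis of S: with S = V diag(d) V^T, the solution is
    Omega_tilde[i, j] = B_tilde[i, j] / ((d_i + d_j)/2 + rho).
    """
    p = S.shape[0]
    d, V = np.linalg.eigh(S)                      # S = V diag(d) V^T, done once
    denom = (d[:, None] + d[None, :]) / 2 + rho   # (d_i + d_j)/2 + rho
    Z = np.eye(p)
    U = np.zeros((p, p))
    for _ in range(n_iter):
        B = np.eye(p) + rho * (Z - U)
        Omega = V @ ((V.T @ B @ V) / denom) @ V.T  # Omega-update (closed form)
        Z = soft_threshold(Omega + U, lam / rho)   # Z-update (l1 prox)
        np.fill_diagonal(Z, np.diag(Omega + U))    # diagonal is left unpenalized
        U = U + Omega - Z                          # dual update
    return Z
```

For a diagonal S the iterates converge to diag(1/d_i), the unpenalized minimizer of the quadratic loss, which is a quick sanity check on the updates.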

