Positive Definite Estimation of Large Covariance Matrix Using Generalized Nonconvex Penalties

04/15/2016, by Fei Wen et al.

This work addresses the problem of large covariance matrix estimation in high-dimensional statistical analysis. Recently, improved iterative algorithms with a positive-definiteness guarantee have been developed. However, these algorithms cannot be directly extended to use a nonconvex penalty for inducing sparsity. In general, a nonconvex penalty can ameliorate the bias problem of the popular convex lasso penalty and is therefore more advantageous. In this work, we propose a class of positive-definite covariance estimators using generalized nonconvex penalties. We develop a first-order algorithm based on the alternating direction method framework to solve the resulting nonconvex optimization problem efficiently, and we prove its convergence. We further analyze the statistical properties of the new estimators under generalized nonconvex penalties. Moreover, we consider an extension of the algorithm to covariance estimation from sketched measurements. The performance of the new estimators is demonstrated through both a simulation study and a gene clustering example on tumor tissue data. Code for the proposed estimators is available at https://github.com/FWen/Nonconvex-PDLCE.git.
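The abstract does not spell out the algorithm, so the following is only a minimal Python sketch of one common alternating-direction scheme for this kind of problem: it alternates an eigenvalue-clipped update (to keep the estimate positive definite) with elementwise SCAD thresholding, one example of a generalized nonconvex penalty. The function names and parameters (lam, eps, rho, a, n_iter) are illustrative assumptions, not the authors' implementation; the reference code is at the GitHub link above.

```python
import numpy as np

def scad_threshold(x, lam, a=3.7):
    """Elementwise SCAD thresholding (one example of a nonconvex penalty)."""
    absx = np.abs(x)
    soft = np.sign(x) * np.maximum(absx - lam, 0.0)          # |x| <= 2*lam
    mid = ((a - 1) * x - np.sign(x) * a * lam) / (a - 2)      # 2*lam < |x| <= a*lam
    return np.where(absx <= 2 * lam, soft,
                    np.where(absx <= a * lam, mid, x))

def pd_sparse_cov(S, lam=0.1, eps=1e-4, rho=1.0, n_iter=200):
    """ADMM-style sketch: find a sparse Sigma close to the sample covariance S
    while keeping its eigenvalues above eps (positive definiteness)."""
    Sigma = S.copy()
    Theta = S.copy()
    U = np.zeros_like(S)                       # scaled dual variable
    for _ in range(n_iter):
        # Sigma-update: average of S and (Theta - U), projected onto {Sigma >= eps*I}
        A = (S + rho * (Theta - U)) / (1.0 + rho)
        w, V = np.linalg.eigh((A + A.T) / 2)
        Sigma = (V * np.maximum(w, eps)) @ V.T
        # Theta-update: elementwise nonconvex thresholding, diagonal left unpenalized
        B = Sigma + U
        Theta = scad_threshold(B, lam / rho)
        np.fill_diagonal(Theta, np.diag(B))
        # dual update
        U = U + Sigma - Theta
    return Sigma                               # at convergence Sigma ~ Theta: sparse and PD
```

Splitting the variable this way lets the positive-definiteness constraint and the nonconvex penalty each be handled by a simple closed-form subproblem (an eigenvalue projection and an elementwise threshold, respectively), which is the usual motivation for an alternating-direction formulation of this estimator.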
