
Ledoit-Wolf linear shrinkage with unknown mean

by Benoît Oriol et al.

This work addresses large-dimensional covariance matrix estimation when the mean is unknown. The empirical covariance estimator fails when the dimension and the number of samples are proportional and tend to infinity, a setting known as Kolmogorov asymptotics. When the mean is known, Ledoit and Wolf (2004) proposed a linear shrinkage estimator and proved its convergence under those asymptotics. To the best of our knowledge, no formal proof has been given for the unknown-mean case. To address this gap, we propose a new estimator and prove its quadratic convergence under the Ledoit and Wolf assumptions. Finally, we show empirically that it outperforms other standard estimators.
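To make the setting concrete, here is a minimal NumPy sketch of the classic Ledoit-Wolf (2004) linear shrinkage estimator with the sample mean plugged in for the unknown mean. This is the baseline construction the abstract refers to, not the paper's corrected estimator; the function name and the naive mean plug-in are assumptions for illustration.

```python
import numpy as np

def lw_shrinkage(X):
    """Classic Ledoit-Wolf (2004) linear shrinkage toward a scaled identity.

    X: (n, p) data matrix. The mean is unknown, so the sample mean is
    plugged in before computing the empirical covariance (the regime the
    paper studies). Returns a convex combination of the empirical
    covariance S and the target mu * I.
    """
    n, p = X.shape
    Xc = X - X.mean(axis=0)                 # plug-in sample mean
    S = Xc.T @ Xc / n                       # empirical covariance
    mu = np.trace(S) / p                    # target scale <S, I> / p
    d2 = np.sum((S - mu * np.eye(p)) ** 2) / p          # dispersion of S around target
    b2 = sum(np.sum((np.outer(x, x) - S) ** 2) for x in Xc) / (n ** 2 * p)
    b2 = min(b2, d2)                        # estimated error of S, capped at d2
    a2 = d2 - b2
    return (b2 / d2) * mu * np.eye(p) + (a2 / d2) * S
```

The shrinkage intensity `b2 / d2` lies in [0, 1], so the estimate interpolates between the noisy empirical covariance and the well-conditioned identity target.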
