Regularization of the Kernel Matrix via Covariance Matrix Shrinkage Estimation

07/19/2017
by Tomer Lancewicki, et al.

The kernel trick, formulated as an inner product in a feature space, facilitates powerful extensions of many well-known algorithms. While the kernel matrix is built from inner products of the feature vectors, the sample covariance matrix of the data is built from their outer products; since both derive from the same feature vectors, their spectral properties are tightly connected. This allows us to examine the kernel matrix through the sample covariance matrix in the feature space, and vice versa. The use of kernels often involves a large number of features compared to the number of observations. In this scenario, the sample covariance matrix is neither well-conditioned nor necessarily invertible, mandating a solution to the problem of estimating high-dimensional covariance matrices under small-sample conditions. We tackle this problem with a shrinkage estimator that offers a compromise between the sample covariance matrix and a well-conditioned matrix (also known as the "target"), with the aim of minimizing the mean-squared error (MSE). We propose a distribution-free kernel matrix regularization approach that is tuned directly from the kernel matrix, avoiding the need to address the feature space explicitly. Numerical simulations demonstrate that the proposed regularization is effective in classification tasks.
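As a rough illustration of the idea (not the paper's exact estimator), the snippet below shrinks an RBF kernel matrix toward a scaled-identity target, `K_reg = (1 - rho) * K + rho * mu * I` with `mu = trace(K)/n`, which preserves the total spectral mass while pulling the eigenvalues together and improving the condition number. The function name `shrink_kernel_matrix`, the fixed shrinkage weight `rho`, and the bandwidth choice are illustrative assumptions; in the paper the shrinkage intensity is tuned from the kernel matrix itself to minimize MSE.

```python
import numpy as np

def shrink_kernel_matrix(K, rho):
    """Shrink a kernel (Gram) matrix toward a scaled-identity target.

    Returns (1 - rho) * K + rho * mu * I, where mu = trace(K) / n,
    so the trace (total spectral mass) of K is preserved exactly.
    Note: rho is a fixed illustrative weight here, not the
    MSE-optimal intensity derived in the paper.
    """
    n = K.shape[0]
    mu = np.trace(K) / n
    return (1.0 - rho) * K + rho * mu * np.eye(n)

# Small-sample, high-dimensional setting: n = 10 observations, p = 500 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 500))

# RBF kernel matrix (bandwidth chosen on the order of the feature dimension).
sq = np.sum(X**2, axis=1)
D2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # pairwise squared distances
K = np.exp(-D2 / (2.0 * 500))

K_reg = shrink_kernel_matrix(K, rho=0.2)
# Shrinking toward mu * I maps each eigenvalue l to (1 - rho) * l + rho * mu,
# so the condition number strictly decreases for any non-scalar PSD K.
print(np.linalg.cond(K), np.linalg.cond(K_reg))
```

Because the shrinkage acts only on the eigenvalues and leaves the eigenvectors of `K` untouched, the regularized matrix stays symmetric positive definite and can be inverted or used in downstream kernel methods without revisiting the feature space.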

