On Kernel Regression with Data-Dependent Kernels

09/04/2022
by James B. Simon, et al.

The primary hyperparameter in kernel regression (KR) is the choice of kernel. In most theoretical studies of KR, one assumes the kernel is fixed before seeing the training data. Under this assumption, it is known that the optimal kernel is equal to the prior covariance of the target function. In this note, we consider KR in which the kernel may be updated after seeing the training data. We point out that an analogous choice of kernel using the posterior of the target function is optimal in this setting. Connections to the view of deep neural networks as data-dependent kernel learners are discussed.
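
As a concrete illustration of the fixed-kernel baseline (not code from the paper): under a Gaussian process prior f ~ GP(0, Σ), the kernel regression predictor built with kernel K = Σ coincides with the Bayes posterior mean, which is the sense in which the prior covariance is the optimal fixed kernel. Below is a minimal Python sketch of this; the RBF prior and the helper names (kernel_regression, rbf) are illustrative choices, not from the paper.

```python
import numpy as np

def kernel_regression(K_train, K_test_train, y, ridge=1e-8):
    """Kernel regression predictor: f_hat(x) = K(x, X) (K(X, X) + ridge*I)^{-1} y."""
    n = K_train.shape[0]
    alpha = np.linalg.solve(K_train + ridge * np.eye(n), y)
    return K_test_train @ alpha

def rbf(a, b, length=0.3):
    """Illustrative RBF prior covariance on 1-D inputs."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=8)
X_test = np.linspace(-1, 1, 50)

Sigma_train = rbf(X_train, X_train)
Sigma_test_train = rbf(X_test, X_train)

# Sample a target function from the prior at the training points.
L = np.linalg.cholesky(Sigma_train + 1e-10 * np.eye(len(X_train)))
y = L @ rng.standard_normal(len(X_train))

# Kernel regression with the prior covariance as the kernel: in the
# small-ridge limit this equals the GP posterior mean, i.e. the
# Bayes-optimal predictor for targets drawn from this prior.
f_hat = kernel_regression(Sigma_train, Sigma_test_train, y)
```

The paper's data-dependent setting then asks what happens when the kernel itself may be chosen after observing (X_train, y); the analogous optimal choice uses the posterior of the target function rather than the prior.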


