
Out-of-Distribution Generalization in Kernel Regression

by Abdulkadir Canatar, et al.

In real-world applications, the data-generating process used to train a machine learning model often differs from the process that generates the data the model encounters at test time. Understanding how, and whether, machine learning models generalize under such distributional shifts has been a theoretical challenge. Here, we study generalization in kernel regression when the training and test distributions differ, using methods from statistical physics. Using the replica method, we derive an analytical formula for the out-of-distribution generalization error that applies to any kernel and to real datasets. We identify an overlap matrix, which quantifies the mismatch between the two distributions for a given kernel, as a key determinant of generalization performance under distribution shift. Using our analytical expressions, we elucidate various generalization phenomena, including the possibility that a mismatch improves generalization. We develop procedures for optimizing the training and test distributions for a given data budget to find the best- and worst-case generalization under the shift. We present applications of our theory to real and synthetic datasets and to many kernels. We compare the results of our theory applied to the Neural Tangent Kernel with simulations of wide networks and show agreement. We analyze linear regression in further depth.
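The setting the abstract describes can be made concrete with a small numerical experiment: fit kernel ridge regression on data drawn from a training distribution, then evaluate it on a shifted test distribution and compare the two generalization errors. The sketch below is illustrative only, not the paper's replica-method formula; the RBF kernel, the target function, the Gaussian mean shift, and all parameter values are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(X, Y, gamma=1.0):
    # Gram matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2)
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def target(X):
    # Hypothetical ground-truth function the regressor tries to learn
    return np.sin(X.sum(axis=1))

# Training data drawn from the training distribution N(0, I)
d, n_train, n_test = 2, 200, 1000
X_train = rng.normal(0.0, 1.0, size=(n_train, d))
y_train = target(X_train)

# Kernel ridge regression: alpha = (K + lam * I)^{-1} y
lam = 1e-3
K = rbf_kernel(X_train, X_train)
alpha = np.linalg.solve(K + lam * np.eye(n_train), y_train)

def predict(X_new):
    return rbf_kernel(X_new, X_train) @ alpha

def gen_error(shift):
    # Test distribution N(shift * 1, I): same covariance, shifted mean.
    # shift = 0 recovers the in-distribution generalization error.
    X_test = rng.normal(shift, 1.0, size=(n_test, d))
    return np.mean((predict(X_test) - target(X_test)) ** 2)

print(f"in-distribution  MSE: {gen_error(0.0):.4f}")
print(f"shifted (mu = 2) MSE: {gen_error(2.0):.4f}")
```

As the mean shift grows, test points fall outside the region covered by the training sample, and the RBF predictor decays toward zero there, so the measured error typically rises; the paper's theory characterizes exactly this kind of train/test mismatch through the kernel-dependent overlap matrix.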




Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks

A fundamental question in modern machine learning is how deep neural net...

Statistical Mechanics of Generalization in Kernel Regression

Generalization beyond a training dataset is a main goal of machine learn...

Distributional Generalization: A New Kind of Generalization

We introduce a new notion of generalization – Distributional Generalizat...

Generalization Error of Generalized Linear Models in High Dimensions

At the heart of machine learning lies the question of generalizability o...

The Value of Out-of-Distribution Data

More data helps us generalize to a task. But real datasets can contain o...

Good linear classifiers are abundant in the interpolating regime

Within the machine learning community, the widely-used uniform convergen...

Domain Generalization by Marginal Transfer Learning

Domain generalization is the problem of assigning class labels to an unl...