Theoretical Characterization of the Generalization Performance of Overfitted Meta-Learning

04/09/2023
by   Peizhong Ju, et al.

Meta-learning has emerged as a successful method for improving learning performance by training over many similar tasks, especially with deep neural networks (DNNs). However, the theoretical understanding of when and why overparameterized models such as DNNs can generalize well in meta-learning is still limited. As an initial step towards addressing this challenge, this paper studies the generalization performance of overfitted meta-learning under a linear regression model with Gaussian features. In contrast to a few recent studies along this line, our framework allows the number of model parameters to be arbitrarily larger than the number of features in the ground truth signal, and hence naturally captures the overparameterized regime of practical deep meta-learning. We show that the overfitted min ℓ_2-norm solution of model-agnostic meta-learning (MAML) can be beneficial, echoing the recent remarkable findings on the "benign overfitting" and "double descent" phenomena in classical (single-task) linear regression. However, due to features unique to meta-learning, such as the task-specific inner-loop gradient descent training and the diversity/fluctuation of the ground-truth signals among training tasks, we find new and interesting properties that do not exist in single-task linear regression. We first provide a reasonably tight high-probability upper bound on the generalization error, in which certain terms decrease as the number of features increases. Our analysis suggests that benign overfitting is more significant and easier to observe when the noise and the diversity/fluctuation of the ground-truth signals across training tasks are large. In this regime, we show that the overfitted min ℓ_2-norm solution can achieve an even lower generalization error than the underparameterized solution.
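To make the setting concrete, the following is a minimal NumPy sketch of the overfitted min ℓ_2-norm MAML solution under a linear regression model with Gaussian features: each task's ground-truth signal fluctuates around a common signal, one inner-loop gradient step is taken on a support set, and the minimum-norm interpolator of the resulting stacked meta-objective is computed via the pseudo-inverse. All dimensions, the step size, and the noise levels (p, T, n_support, n_query, alpha, sigma, nu) are illustrative assumptions, not the paper's choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions and constants (assumptions, not the paper's exact setup).
p = 400            # number of model parameters (overparameterized)
T = 10             # number of training tasks
n_support = 5      # samples per task used by the inner gradient step
n_query = 5        # samples per task used by the outer (meta) loss
alpha = 0.1        # inner-loop step size
sigma = 0.5        # label-noise standard deviation
nu = 0.5           # std of each task's fluctuation around the common signal

theta_star = rng.normal(size=p) / np.sqrt(p)   # common ground-truth signal

A_rows, b_rows = [], []
for _ in range(T):
    theta_t = theta_star + nu * rng.normal(size=p) / np.sqrt(p)  # task ground truth
    X_s = rng.normal(size=(n_support, p))                        # Gaussian features
    X_q = rng.normal(size=(n_query, p))
    y_s = X_s @ theta_t + sigma * rng.normal(size=n_support)
    y_q = X_q @ theta_t + sigma * rng.normal(size=n_query)

    # One inner gradient step on the support loss maps the meta-parameter w to
    #   w_t(w) = (I - (alpha/n_support) X_s^T X_s) w + (alpha/n_support) X_s^T y_s,
    # so the query loss ||X_q w_t(w) - y_q||^2 is least squares in w with:
    M = np.eye(p) - (alpha / n_support) * X_s.T @ X_s
    A_rows.append(X_q @ M)
    b_rows.append(y_q - (alpha / n_support) * X_q @ X_s.T @ y_s)

A = np.vstack(A_rows)          # (T * n_query) x p stacked meta design matrix
b = np.concatenate(b_rows)

# Overfitted (interpolating) min l2-norm MAML solution via the pseudo-inverse.
w_hat = np.linalg.pinv(A) @ b
print("meta-training residual:", np.linalg.norm(A @ w_hat - b))        # ~0 (interpolation)
print("distance to common signal:", np.linalg.norm(w_hat - theta_star))
```

Since T * n_query = 50 is much smaller than p = 400, the stacked meta-objective is interpolated exactly, which is the overfitted regime the bound in the paper addresses.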

