Generalization Bounds For Meta-Learning: An Information-Theoretic Analysis

09/29/2021
by   Qi Chen, et al.

We derive a novel information-theoretic analysis of the generalization properties of meta-learning algorithms. Concretely, our analysis gives a unified treatment of both the conventional learning-to-learn framework and modern model-agnostic meta-learning (MAML) algorithms. Moreover, we provide a data-dependent generalization bound for a stochastic variant of MAML that is non-vacuous for deep few-shot learning. Compared with previous bounds that depend on the squared norm of gradients, empirical validations on both simulated data and a well-known few-shot benchmark show that our bound is orders of magnitude tighter in most situations.
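For background, the kind of bound this line of work builds on is the classical single-task information-theoretic bound of Xu and Raginsky (2017); the meta-learning analyses above extend it to the two-level task/sample structure. The sketch below is that classical result, not the paper's bound, and its symbols (W for the algorithm output, S for the n-sample training set, sigma for the sub-Gaussian parameter of the loss) are standard in that literature rather than taken from this abstract:

    % Classical single-task bound (Xu & Raginsky, 2017): if the loss
    % \ell(w, Z) is \sigma-sub-Gaussian under the data distribution, the
    % expected gap between population risk L_\mu and empirical risk L_S
    % of an algorithm with output W trained on the n-sample set S
    % is controlled by the mutual information I(W; S):
    \[
      \bigl|\, \mathbb{E}\bigl[ L_\mu(W) - L_S(W) \bigr] \,\bigr|
      \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(W; S)} .
    \]

Roughly speaking, meta-learning versions of this bound replace the single term I(W; S) with mutual-information terms at both the environment (task) level and the per-task sample level.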
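Since the abstract assumes familiarity with MAML, here is a minimal first-order MAML (FOMAML) sketch in NumPy on a toy family of linear-regression tasks. Everything in it (the task distribution, draw_data, the step sizes alpha and beta, the meta-batch size) is illustrative and not taken from the paper; in particular, this is the standard first-order approximation, not the stochastic MAML variant the paper analyzes, whose noise mechanism the abstract does not specify.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 5  # parameter dimension (illustrative)

    def draw_data(w_star, n=10):
        """Draw n noisy linear-regression samples for a task with weights w_star."""
        X = rng.normal(size=(n, d))
        y = X @ w_star + 0.1 * rng.normal(size=n)
        return X, y

    def grad(w, X, y):
        """Gradient of the mean squared error 0.5 * mean((X @ w - y)**2)."""
        return X.T @ (X @ w - y) / len(y)

    # First-order MAML: one inner gradient step per task on a support set,
    # then an outer update using the query-set gradient at the adapted weights.
    w = np.zeros(d)
    alpha, beta, tasks_per_batch = 0.1, 0.05, 4  # illustrative hyperparameters
    for step in range(1000):
        meta_grad = np.zeros_like(w)
        for _ in range(tasks_per_batch):
            w_star = rng.normal(size=d)   # sample a task from the environment
            Xs, ys = draw_data(w_star)    # support set: inner adaptation
            Xq, yq = draw_data(w_star)    # query set: meta-objective
            w_adapted = w - alpha * grad(w, Xs, ys)
            meta_grad += grad(w_adapted, Xq, yq)  # first-order approximation
        w -= beta * meta_grad / tasks_per_batch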


Related research

05/09/2020 · Information-Theoretic Generalization Bounds for Meta-Learning and Applications
Meta-learning, or "learning to learn", refers to techniques that infer a...

09/07/2020 · Information Theoretic Meta Learning with Gaussian Processes
We formulate meta learning using information theoretic concepts such as ...

10/14/2020 · Theoretical bounds on estimation error for meta-learning
Machine learning models have traditionally been developed under the assu...

02/05/2021 · Learning While Dissipating Information: Understanding the Generalization Capability of SGLD
Understanding the generalization capability of learning algorithms is at...

02/21/2018 · Generalization in Machine Learning via Analytical Learning Theory
This paper introduces a novel measure-theoretic learning theory to analy...

06/14/2020 · Graph Meta Learning via Local Subgraphs
Prevailing methods for graphs require abundant label and edge informatio...

06/08/2022 · Sharp-MAML: Sharpness-Aware Model-Agnostic Meta Learning
Model-agnostic meta learning (MAML) is currently one of the dominating a...
