Conditional Mutual Information Bound for Meta Generalization Gap

10/21/2020
by Arezou Rezazadeh, et al.

Meta-learning infers an inductive bias, typically in the form of the hyperparameters of a base-learning algorithm, by observing data from a finite number of related tasks. This paper presents an information-theoretic upper bound on the average meta-generalization gap that builds on the conditional mutual information (CMI) framework of Steinke and Zakynthinou (2020), originally developed for conventional learning. Applied to meta-learning, the CMI framework involves a training meta-supersample obtained by first sampling 2N independent tasks from the task environment and then drawing 2M independent training samples for each sampled task. The meta-training data fed to the meta-learner are then obtained by randomly selecting N of the 2N available tasks and, for each selected task, M of its 2M available training samples. The resulting bound is explicit in two CMI terms, which measure the information that the meta-learner output and the base-learner output, respectively, provide about which training data are selected, given the entire meta-supersample.
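The construction of the meta-supersample and the random selection of the meta-training data are concrete enough to sketch in code. The Python snippet below is a minimal illustration of that sampling protocol, not the paper's implementation; the function names, the toy Gaussian task environment, and the sizes N and M in the usage lines are assumptions chosen purely for illustration. The selection indices returned by select_meta_training_set play the role of the random selection variables about which the two CMI terms measure information.

import numpy as np

def build_meta_supersample(sample_task, sample_data, N, M, rng):
    # Draw 2N i.i.d. tasks from the task environment, then 2M i.i.d.
    # training samples per task (the meta-supersample).
    tasks = [sample_task(rng) for _ in range(2 * N)]
    supersample = [np.stack([sample_data(t, rng) for _ in range(2 * M)])
                   for t in tasks]
    return tasks, supersample

def select_meta_training_set(supersample, N, M, rng):
    # Randomly select N of the 2N tasks, and M of the 2M samples for each
    # selected task; these selection indices are the random variables the
    # CMI terms condition on, given the entire meta-supersample.
    task_sel = rng.permutation(2 * N)[:N]
    sample_sel = {}
    meta_train = []
    for t in task_sel:
        idx = rng.permutation(2 * M)[:M]
        sample_sel[int(t)] = idx
        meta_train.append(supersample[t][idx])
    return task_sel, sample_sel, meta_train

# Toy usage with a (hypothetical) Gaussian mean-estimation environment:
# a task is a mean mu, and a training sample is a noisy observation of mu.
rng = np.random.default_rng(0)
sample_task = lambda rng: rng.normal()
sample_data = lambda mu, rng: mu + rng.normal(size=3)
tasks, ss = build_meta_supersample(sample_task, sample_data, N=4, M=5, rng=rng)
task_sel, sample_sel, meta_train = select_meta_training_set(ss, N=4, M=5, rng=rng)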

Related research

05/09/2020 · Information-Theoretic Generalization Bounds for Meta-Learning and Applications
Meta-learning, or "learning to learn", refers to techniques that infer a...

01/21/2021 · An Information-Theoretic Analysis of the Impact of Task Similarity on Meta-Learning
Meta-learning aims at optimizing the hyperparameters of a model class or...

06/22/2020 · Siamese Meta-Learning and Algorithm Selection with 'Algorithm-Performance Personas' [Proposal]
Automated per-instance algorithm selection often outperforms single lear...

10/19/2021 · BAMLD: Bayesian Active Meta-Learning by Disagreement
Data-efficient learning algorithms are essential in many practical appli...

11/04/2020 · Transfer Meta-Learning: Information-Theoretic Bounds and Information Meta-Risk Minimization
Meta-learning automatically infers an inductive bias by observing data f...

10/12/2022 · Evaluated CMI Bounds for Meta Learning: Tightness and Expressiveness
Recent work has established that the conditional mutual information (CMI...

09/07/2020 · Information Theoretic Meta Learning with Gaussian Processes
We formulate meta learning using information theoretic concepts such as ...
