Transfer Meta-Learning: Information-Theoretic Bounds and Information Meta-Risk Minimization

11/04/2020
by Sharu Theresa Jose, et al.

Meta-learning automatically infers an inductive bias by observing data from a number of related tasks. The inductive bias is encoded by hyperparameters that determine aspects of the model class or training algorithm, such as initialization or learning rate. Meta-learning assumes that the learning tasks belong to a task environment, and that tasks are drawn from the same task environment both during meta-training and meta-testing. This, however, may not hold true in practice. In this paper, we introduce the problem of transfer meta-learning, in which tasks are drawn from a target task environment during meta-testing that may differ from the source task environment observed during meta-training. Novel information-theoretic upper bounds are obtained on the transfer meta-generalization gap, which measures the difference between the meta-training loss, available at the meta-learner, and the average loss on meta-test data from a new, randomly selected, task in the target task environment. The first bound, on the average transfer meta-generalization gap, captures the meta-environment shift between source and target task environments via the KL divergence between source and target data distributions. The second, PAC-Bayesian bound, and the third, single-draw bound, account for this shift via the log-likelihood ratio between source and target task distributions. Furthermore, two transfer meta-learning solutions are introduced. For the first, termed Empirical Meta-Risk Minimization (EMRM), we derive bounds on the average optimality gap. The second, referred to as Information Meta-Risk Minimization (IMRM), is obtained by minimizing the PAC-Bayesian bound. IMRM is shown via experiments to potentially outperform EMRM.
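To make the quantities above concrete, here is a rough sketch in our own notation (not necessarily the paper's), under standard assumptions such as a $\sigma$-subgaussian loss; the exact constants and decomposition are those stated in the paper. Suppose the meta-learner observes data $D_{1:N}$ from $N$ tasks drawn from a source task environment and outputs hyperparameters $U$, while meta-testing draws a fresh task from a possibly different target task environment. The transfer meta-generalization gap is then

\[
\Delta\mathcal{L}(U) \,=\, \mathbb{E}_{\tau \sim P^{\mathrm{tgt}}}\!\big[\mathcal{L}_{\tau}(U)\big] \,-\, \widehat{\mathcal{L}}(U \mid D_{1:N}),
\]

i.e., the difference between the expected loss on a new task from the target environment and the meta-training loss available at the meta-learner. Average-case bounds of the type announced above take the schematic form

\[
\big|\mathbb{E}[\Delta\mathcal{L}(U)]\big| \,\lesssim\, \sqrt{2\sigma^{2}\Big(\mathrm{D}\big(P^{\mathrm{tgt}} \,\big\|\, P^{\mathrm{src}}\big) + \tfrac{1}{N}\, I(U; D_{1:N})\Big)} \,+\, (\text{per-task terms}),
\]

where the shift between source and target environments is charged through a KL divergence and the meta-learner's sensitivity to its data through a mutual-information term; the PAC-Bayesian and single-draw bounds instead account for the shift via the log-likelihood ratio between target and source task distributions.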

Related research

01/21/2021
An Information-Theoretic Analysis of the Impact of Task Similarity on Meta-Learning
Meta-learning aims at optimizing the hyperparameters of a model class or...

10/13/2020
Information-Theoretic Bounds on Transfer Generalization Gap Based on Jensen-Shannon Divergence
In transfer learning, training and testing data sets are drawn from diff...

06/20/2021
Transfer Bayesian Meta-learning via Weighted Free Energy Minimization
Meta-learning optimizes the hyperparameters of a training procedure, suc...

10/21/2020
Conditional Mutual Information Bound for Meta Generalization Gap
Meta-learning infers an inductive bias—typically in the form of the hype...

07/12/2022
An Information-Theoretic Analysis for Transfer Learning: Error Bounds and Applications
Transfer learning, or domain adaptation, is concerned with machine learn...

10/14/2020
Theoretical bounds on estimation error for meta-learning
Machine learning models have traditionally been developed under the assu...

05/28/2018
Model averaging for robust extrapolation in evidence synthesis
Extrapolation from a source to a target, e.g., from adults to children, ...
