Meta-Learning without Memorization

12/09/2019, by Mingzhang Yin, et al.

The ability to learn new concepts with small amounts of data is a critical aspect of intelligence that has proven challenging for deep learning methods. Meta-learning has emerged as a promising technique for leveraging data from previous tasks to enable efficient learning of new tasks. However, most meta-learning algorithms implicitly require that the meta-training tasks be mutually-exclusive, such that no single model can solve all of the tasks at once. For example, when creating tasks for few-shot image classification, prior work uses a per-task random assignment of image classes to N-way classification labels. If this is not done, the meta-learner can ignore the task training data and learn a single model that performs all of the meta-training tasks zero-shot, but does not adapt effectively to new image classes. This requirement means that the user must take great care in designing the tasks, for example by shuffling labels or removing task identifying information from the inputs. In some domains, this makes meta-learning entirely inapplicable. In this paper, we address this challenge by designing a meta-regularization objective using information theory that places precedence on data-driven adaptation. This causes the meta-learner to decide what must be learned from the task training data and what should be inferred from the task testing input. By doing so, our algorithm can successfully use data from non-mutually-exclusive tasks to efficiently adapt to novel tasks. We demonstrate its applicability to both contextual and gradient-based meta-learning algorithms, and apply it in practical settings where applying standard meta-learning has been difficult. Our approach substantially outperforms standard meta-learning algorithms in these settings.
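The per-task random assignment of classes to N-way labels described above can be sketched in a few lines. This is an illustrative example, not code from the paper: `class_to_images`, `make_task`, and the parameter names are hypothetical, and the snippet only shows how shuffling the class-to-label mapping per task makes tasks mutually exclusive, so a meta-learner cannot memorize a single zero-shot classifier.

```python
import random

def make_task(class_to_images, n_way=5, k_shot=1, k_query=15):
    """Sample one N-way, K-shot task with per-task random label assignment.

    `class_to_images` maps each class name to a list of examples
    (a hypothetical input format for this sketch). Because the same
    image class receives a different label in different tasks, no
    single model can solve all tasks without adapting on the support set.
    """
    # Pick N distinct classes, then shuffle which class gets which label.
    classes = random.sample(list(class_to_images), n_way)
    random.shuffle(classes)  # per-task random class -> label assignment

    support, query = [], []
    for label, cls in enumerate(classes):
        examples = random.sample(class_to_images[cls], k_shot + k_query)
        support += [(x, label) for x in examples[:k_shot]]
        query += [(x, label) for x in examples[k_shot:]]
    return support, query
```

Without the `random.shuffle` step (non-mutually-exclusive tasks), each class would always map to the same label, and the meta-learner could ignore the support set entirely; the paper's meta-regularization objective is designed to avoid that memorization even when such shuffling is impossible.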


Related research:

- Meta Cyclical Annealing Schedule: A Simple Approach to Avoiding Meta-Amortization Error (03/04/2020): "The ability to learn new concepts with small amounts of data is a crucia..."
- Continuous Meta-Learning without Tasks (12/18/2019): "Meta-learning is a promising strategy for learning to efficiently learn ..."
- Meta-Regularization by Enforcing Mutual-Exclusiveness (01/24/2021): "Meta-learning models have two objectives. First, they need to be able to..."
- Modular Meta-Learning with Shrinkage (09/12/2019): "Most gradient-based approaches to meta-learning do not explicitly accoun..."
- DAWSON: A Domain Adaptive Few Shot Generation Framework (01/02/2020): "Training a Generative Adversarial Networks (GAN) for a new domain from s..."
- Yet Meta Learning Can Adapt Fast, It Can Also Break Easily (09/02/2020): "Meta learning algorithms have been widely applied in many tasks for effi..."
- Graph Meta Learning via Local Subgraphs (06/14/2020): "Prevailing methods for graphs require abundant label and edge informatio..."
