Meta-Learning without Memorization

12/09/2019 · Mingzhang Yin et al. · Google, The University of Texas at Austin, UC Berkeley, Stanford University

The ability to learn new concepts with small amounts of data is a critical aspect of intelligence that has proven challenging for deep learning methods. Meta-learning has emerged as a promising technique for leveraging data from previous tasks to enable efficient learning of new tasks. However, most meta-learning algorithms implicitly require that the meta-training tasks be mutually-exclusive, such that no single model can solve all of the tasks at once. For example, when creating tasks for few-shot image classification, prior work uses a per-task random assignment of image classes to N-way classification labels. If this is not done, the meta-learner can ignore the task training data and learn a single model that performs all of the meta-training tasks zero-shot, but does not adapt effectively to new image classes. This requirement means that the user must take great care in designing the tasks, for example by shuffling labels or removing task identifying information from the inputs. In some domains, this makes meta-learning entirely inapplicable. In this paper, we address this challenge by designing a meta-regularization objective using information theory that places precedence on data-driven adaptation. This causes the meta-learner to decide what must be learned from the task training data and what should be inferred from the task testing input. By doing so, our algorithm can successfully use data from non-mutually-exclusive tasks to efficiently adapt to novel tasks. We demonstrate its applicability to both contextual and gradient-based meta-learning algorithms, and apply it in practical settings where applying standard meta-learning has been difficult. Our approach substantially outperforms standard meta-learning algorithms in these settings.


1 Introduction

The ability to learn new concepts and skills with small amounts of data is a critical aspect of intelligence that many machine learning systems lack. Meta-learning [29] has emerged as a promising approach for enabling systems to quickly learn new tasks by building upon experience from previous related tasks [32, 19, 28, 27, 8]. Meta-learning accomplishes this by explicitly optimizing for few-shot generalization across a set of meta-training tasks. The meta-learner is trained such that, after being presented with a small task training set, it can accurately make predictions on test datapoints for that meta-training task.

While these methods have shown promising results, current methods require careful design of the meta-training tasks to prevent a subtle form of task overfitting, distinct from standard overfitting in supervised learning. If the task can be accurately inferred from the test input alone, then the task training data can be ignored while still achieving low meta-training loss. In effect, the model will collapse to one that makes zero-shot decisions. This presents an opportunity for overfitting where the meta-learner generalizes on meta-training tasks, but fails to adapt when presented with training data from novel tasks. We call this form of overfitting the memorization problem in meta-learning because the meta-learner memorizes a function that solves all of the meta-training tasks, rather than learning to adapt.

Existing meta-learning algorithms implicitly resolve this problem by carefully designing the meta-training tasks such that no single model can solve all tasks zero-shot; we call tasks constructed in this way mutually-exclusive. For example, for $N$-way classification, each task consists of examples from $N$ randomly sampled classes. The classes are labeled from $1$ to $N$, and critically, for each task, we randomize the assignment of classes to labels (visualized in Appendix Figure 4). This ensures that the task-specific class-to-label assignment cannot be inferred from a test input alone. However, the mutually-exclusive tasks requirement places a substantial burden on the user to cleverly design the meta-training setup (e.g., by shuffling labels or omitting task-identifying information). While shuffling labels provides a reasonable mechanism to force tasks to be mutually-exclusive with standard few-shot image classification datasets such as MiniImageNet [27], this solution cannot be applied to all domains where we would like to utilize meta-learning. For example, consider meta-learning a pose predictor that can adapt to different objects: even if different objects are used for meta-training, a powerful model can simply learn to ignore the training set for each task, and directly learn to predict the pose of each of the objects. However, such a model would not be able to adapt to new objects at meta-test time.

The primary contributions of this work are: 1) to identify and formalize the memorization problem in meta-learning, and 2) to propose a meta-regularizer (MR) using information theory as a general approach for mitigating this problem without placing restrictions on the task distribution. We formally differentiate the meta-learning memorization problem from the overfitting problem in conventional supervised learning, and empirically show that naïve applications of standard regularization techniques do not solve the memorization problem in meta-learning. The key insight of our meta-regularization approach is that the model acquired when memorizing tasks is more complex than the model that results from task-specific adaptation, because the memorization model is a single model that simultaneously performs well on all tasks: its weights must contain all of the information needed to do well on test points without looking at training points. Therefore, we expect the information content of the weights of a memorization model to be larger, and hence the model to be more complex. As a result, we propose an objective that regularizes the information complexity of the meta-learned function class (motivated by Alemi et al. [2], Achille & Soatto [1]). Furthermore, we show that meta-regularization in MAML can be rigorously motivated by a PAC-Bayes bound on generalization. In a series of experiments on non-mutually-exclusive task distributions entailing both few-shot regression and classification, we find that memorization poses a significant challenge for both gradient-based [8] and contextual [11] meta-learning methods, resulting in near-random performance on test tasks in some cases. Our meta-regularization approach enables both of these methods to achieve efficient adaptation and generalization, leading to substantial performance gains across the board on non-mutually-exclusive tasks.

2 Preliminaries

We focus on the standard supervised meta-learning problem (see, e.g., Finn et al. [8]). Briefly, we assume tasks $\mathcal{T}_i$ are sampled from a task distribution $p(\mathcal{T})$. During meta-training, for each task, we observe a set of training data $\mathcal{D}_i = (\mathbf{x}_i, \mathbf{y}_i)$ and a set of test data $\mathcal{D}^*_i = (\mathbf{x}^*_i, \mathbf{y}^*_i)$, with $\mathbf{x}_i, \mathbf{y}_i$ sampled from $p(x, y \mid \mathcal{T}_i)$, and similarly for $\mathcal{D}^*_i$. We denote the entire meta-training set as $\mathcal{M} = \{(\mathcal{D}_i, \mathcal{D}^*_i)\}_{i=1}^N$. The goal of meta-training is to learn a model for a new task $\mathcal{T}$ by leveraging what is learned during meta-training and a small amount of training data $\mathcal{D}$ for the new task. We use $\theta$ to denote the meta-parameters learned during meta-training and $\phi$ to denote the task-specific parameters that are computed based on the task training data.
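To fix this notation concretely, here is a minimal sketch (in Python, with names of our choosing rather than the paper's) of how such a meta-training set can be represented:

```python
from typing import List, NamedTuple

import numpy as np

# A minimal sketch of the data structures defined above (names are ours, not
# the paper's): a meta-training set M is a list of tasks, each holding a
# training split D_i and a test split D_i*.
class Split(NamedTuple):
    x: np.ndarray  # inputs, e.g. shape (K, dim_x)
    y: np.ndarray  # targets, e.g. shape (K, dim_y)

class Task(NamedTuple):
    train: Split  # D_i: used to infer task-specific parameters phi
    test: Split   # D_i*: used to evaluate the adapted model and train theta

MetaTrainingSet = List[Task]  # M = {(D_1, D_1*), ..., (D_N, D_N*)}
```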

Following Grant et al. [14], Gordon et al. [13], given a meta-training set $\mathcal{M}$, we consider meta-learning algorithms that maximize the conditional likelihood $q(\hat{y}^* = y^* \mid x^*, \theta, \mathcal{D})$, which is composed of three distributions: $q(\theta \mid \mathcal{M})$, which summarizes the meta-training data into a distribution on meta-parameters; $q(\phi \mid \mathcal{D}, \theta)$, which summarizes the per-task training set into a distribution on task-specific parameters; and $q(\hat{y}^* \mid x^*, \phi, \theta)$, the predictive distribution. These distributions are learned to minimize

$$ -\frac{1}{N} \sum_{i=1}^{N} \mathbb{E}_{q(\theta \mid \mathcal{M})\, q(\phi_i \mid \mathcal{D}_i, \theta)} \left[ \log q(\hat{y}^*_i = y^*_i \mid x^*_i, \phi_i, \theta) \right] \quad (1) $$

For example, in MAML [8], $\theta$ and $\phi$ are the weights of a predictor network, $q(\theta \mid \mathcal{M})$ is a delta function learned over the meta-training data, $q(\phi \mid \mathcal{D}, \theta)$ is a delta function centered at a point defined by gradient optimization, and $\phi$ parameterizes the predictor network [14]. In particular, to determine the task-specific parameters $\phi$, the task training data $\mathcal{D}$ and $\theta$ are used in the predictor model via one (or a few) inner gradient steps, $\phi = \theta + \alpha \nabla_\theta \sum_{(x, y) \in \mathcal{D}} \log q(y \mid x, \theta)$, with inner learning rate $\alpha$.
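As a concrete illustration, here is a minimal sketch of this adaptation step and the resulting meta-objective (our own toy example, not the authors' code; the architecture, sinusoid task sampler, and learning rates are all illustrative assumptions):

```python
import torch
import torch.nn.functional as F

# A minimal sketch of the MAML adaptation step described above.
def predict(params, x):
    w1, b1, w2, b2 = params
    return torch.tanh(x @ w1 + b1) @ w2 + b2

def adapt(theta, x_tr, y_tr, alpha=0.01):
    """One inner gradient step: phi = theta - alpha * grad(task training loss)."""
    loss = F.mse_loss(predict(theta, x_tr), y_tr)
    grads = torch.autograd.grad(loss, theta, create_graph=True)  # keep graph for the meta-update
    return [p - alpha * g for p, g in zip(theta, grads)]

def sample_task(k=10):
    """Toy sinusoid task: train and test splits share one amplitude."""
    amp = 4.0 * torch.rand(())
    x = 10 * torch.rand(2 * k, 1) - 5
    y = amp * torch.sin(x)
    return x[:k], y[:k], x[k:], y[k:]

theta = [torch.randn(1, 40, requires_grad=True), torch.zeros(40, requires_grad=True),
         torch.randn(40, 1, requires_grad=True), torch.zeros(1, requires_grad=True)]
opt = torch.optim.Adam(theta, lr=1e-3)
for _ in range(1000):
    x_tr, y_tr, x_te, y_te = sample_task()
    phi = adapt(theta, x_tr, y_tr)                    # q(phi | D, theta): a delta at phi
    meta_loss = F.mse_loss(predict(phi, x_te), y_te)  # evaluated on the task test data D*
    opt.zero_grad()
    meta_loss.backward()
    opt.step()
```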

Another family of meta-learning algorithms are contextual methods [28], such as conditional neural processes (CNP) [12, 11]. CNP instead defines $q(\phi \mid \mathcal{D}, \theta)$ as a deterministic mapping from $\mathcal{D}$ to a summary statistic $\phi$ (parameterized by $\theta$). In particular, $\phi$ is the output of an aggregator $a_\theta(\cdot)$ applied to features $h_\theta(x_k, y_k)$ extracted from the task training data. Then $\theta$ parameterizes a predictor network that takes $\phi$ and $x^*$ as input and produces a predictive distribution $q(\hat{y}^* \mid x^*, \phi, \theta)$.
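A rough sketch of this encode-aggregate-decode structure (illustrative sizes and architecture, not the reference implementation):

```python
import torch
import torch.nn as nn

# A sketch of the CNP structure described above: encode (x, y) pairs,
# aggregate into a summary statistic phi, decode (phi, x*) into a prediction.
class CNP(nn.Module):
    def __init__(self, dim_x=1, dim_y=1, dim_phi=64, dim_h=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_x + dim_y, dim_h), nn.ReLU(),
                                     nn.Linear(dim_h, dim_phi))
        self.decoder = nn.Sequential(nn.Linear(dim_phi + dim_x, dim_h), nn.ReLU(),
                                     nn.Linear(dim_h, 2 * dim_y))  # predictive mean and log-variance

    def forward(self, x_train, y_train, x_test):
        feats = self.encoder(torch.cat([x_train, y_train], dim=-1))  # h_theta(x_k, y_k)
        phi = feats.mean(dim=0, keepdim=True)        # permutation-invariant aggregator a_theta
        phi = phi.expand(x_test.shape[0], -1)
        out = self.decoder(torch.cat([phi, x_test], dim=-1))
        mu, log_var = out.chunk(2, dim=-1)
        return mu, log_var                           # Gaussian q(y* | x*, phi, theta)
```

Mean-pooling makes $\phi$ permutation-invariant in the task training examples, so the same network can consume training sets of any size.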

In the following sections, we describe a common pitfall for a variety of meta-learning algorithms, including MAML and CNP, and a general meta-regularization approach to prevent this pitfall.

3 The Memorization Problem in Meta-Learning

The ideal meta-learning algorithm will learn a meta-parameter distribution $q(\theta \mid \mathcal{M})$ that generalizes to novel tasks. However, we find that unless tasks are carefully designed, current meta-learning algorithms can overfit to the tasks and end up ignoring the task training data (i.e., either $q(\phi \mid \mathcal{D}, \theta)$ does not depend on $\mathcal{D}$, or $q(\hat{y}^* \mid x^*, \phi, \theta)$ does not depend on $\phi$, as shown in Figure 1), which can lead to poor generalization. This memorization phenomenon is best understood through examples.

Consider a 3D object pose prediction problem (illustrated in Figure 1), where each object has a fixed canonical pose. The $(x, y)$ pairs for the task are 2D grey-scale images of the rotated object ($x$) and the rotation angle relative to the fixed canonical pose for that object ($y$). In the most extreme case, for an unseen object, the task is impossible to solve without using the task training data $\mathcal{D}$, because the canonical pose of the unseen object is unknown. The number of objects in the meta-training dataset is small, so it is straightforward for a single network to memorize the canonical pose for each training object and to infer the object from the input image (i.e., task overfitting), thus achieving low training error without using $\mathcal{D}$. However, by construction, this solution will necessarily have poor generalization to test tasks with unseen objects.

As another example, imagine an automated medical prescription system that suggests medication prescriptions to doctors based on patient symptoms and the patient's previous record of prescription responses (i.e., medical history) for adaptation. In the meta-learning framework, each patient represents a separate task. Here, the symptoms and prescriptions have a close relationship, so we cannot assign random prescriptions to symptoms, in contrast to classification tasks where we can randomly shuffle the labels to make tasks mutually exclusive. For this non-mutually-exclusive task distribution, a standard meta-learning system can memorize the patients' identity information during training, leading it to ignore the medical history and only utilize the symptoms combined with the memorized information. As a result, it may issue highly accurate prescriptions on the meta-training set, but fail to adapt to new patients effectively. While such a system would achieve a baseline level of accuracy for new patients, it would be no better than a standard supervised learning method applied to the pooled data.

We formally define (complete) memorization as:

Definition 1 (Complete Meta-Learning Memorization).

Complete memorization in meta-learning is when the learned model ignores the task training data such that $I(\hat{y}^*; \mathcal{D} \mid x^*, \theta) = 0$ (i.e., $q(\hat{y}^* \mid x^*, \theta, \mathcal{D}) = q(\hat{y}^* \mid x^*, \theta) = \mathbb{E}_{\mathcal{D}'}\left[ q(\hat{y}^* \mid x^*, \theta, \mathcal{D}') \right]$).

Memorization describes an issue with overfitting the meta-training tasks, but it does not preclude the network from generalizing to unseen $(x, y)$ pairs on tasks similar to the training tasks. Memorization becomes an undesirable problem for generalization to new tasks when $I(y^*; \mathcal{D} \mid x^*) > 0$ (i.e., when the task training data is necessary to make accurate predictions, even with exact inference under the data generating distribution).

A model with the memorization problem may generalize to new datapoints in training tasks but cannot generalize to novel tasks, which distinguishes it from typical overfitting in supervised learning. In practice, we find that MAML and CNP frequently converge to this memorization solution (Table 2). For MAML, memorization can occur when a particular setting of $\theta$ that does not adapt to the task training data can achieve comparable meta-training error to a solution that adapts $\theta$. For example, if a single setting of $\theta$ can solve all of the meta-training tasks (i.e., the predictive error is close to zero for all tasks in $\mathcal{M}$ without adaptation), the optimization may converge to a stationary point of the MAML objective where minimal adaptation occurs based on the task training set (i.e., $\phi \approx \theta$). For a novel task where it is necessary to use the task training data, MAML can in principle still leverage the task training data because the adaptation step is based on gradient descent. However, in practice, the poor initialization of $\theta$ can affect the model's ability to generalize from a small amount of data. For CNP, memorization can occur when the predictive distribution network can achieve low training error without using the task training summary statistics $\phi$. On a novel task, the network is not trained to use $\phi$, so it is unable to use the information extracted from the task training set to effectively generalize.

In some problem domains, the memorization problem can be avoided by carefully constructing the tasks. For example, for $N$-way classification, each task consists of examples from $N$ randomly sampled classes. If the classes are assigned to a random permutation of $(1, \ldots, N)$ for each task, this ensures that the task-specific class-to-label assignment cannot be inferred from the test inputs alone. As a result, a model that ignores the task training data cannot achieve low training error, preventing convergence to the memorization solution. We refer to tasks constructed in this way as mutually-exclusive. However, the mutually-exclusive tasks requirement places a substantial burden on the user to cleverly design the meta-training setup (e.g., by shuffling labels or omitting task-identifying information) and cannot be met in all domains where we would like to utilize meta-learning.
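As a concrete reference point, a sketch of this label-shuffling construction (helper and dataset names are ours):

```python
import random

# A sketch of the mutually-exclusive task construction described above: each
# task draws N classes and assigns them to the labels 0..N-1 in a fresh
# random order, so the class-to-label mapping cannot be inferred from x* alone.
def sample_mutually_exclusive_task(examples_by_class, n_way=5, k_shot=1, k_query=15):
    classes = random.sample(sorted(examples_by_class), n_way)  # random classes...
    random.shuffle(classes)                                    # ...in a random label order
    train, test = [], []
    for label, cls in enumerate(classes):
        ex = random.sample(examples_by_class[cls], k_shot + k_query)
        train += [(img, label) for img in ex[:k_shot]]
        test += [(img, label) for img in ex[k_shot:]]
    return train, test
```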

Figure 1: Left: An example of non-mutually-exclusive pose prediction tasks, which may lead to the memorization problem. The training tasks are non-mutually-exclusive because the test data label (right) can be inferred accurately without using the task training data (left), by memorizing the canonical orientation of the meta-training objects. For a new object and canonical orientation (bottom), the task cannot be solved without using the task training data (bottom left) to infer the canonical orientation. Right: Graphical model for meta-learning. Observed variables are shaded. Without either one of the dashed arrows, $\hat{y}^*$ is conditionally independent of $\mathcal{D}$ given $\theta$ and $x^*$, which we refer to as complete memorization (Definition 1).

4 Meta Regularization Using Information Theory

At a high level, the sources of information in the predictive distribution $q(\hat{y}^* \mid x^*, \theta)$ come from the input, the meta-parameters, and the data. The memorization problem occurs when the model encodes task information in the predictive network that is readily available from the task training set (i.e., it memorizes the task information for each meta-training task). We could resolve this problem by encouraging the model to minimize the training error and to rely on the task training dataset as much as possible for the prediction of $y^*$ (i.e., to maximize $I(\hat{y}^*; \mathcal{D} \mid x^*, \theta)$). Explicitly maximizing this mutual information requires an intractable marginalization over task training sets to compute $q(\hat{y}^* \mid x^*, \theta)$. Instead, we can implicitly encourage it by restricting the information flow from the other sources ($x^*$ and $\theta$) to $\hat{y}^*$. To achieve both low error and low mutual information between $\hat{y}^*$ and $(x^*, \theta)$, the model must use the task training data $\mathcal{D}$ to make predictions, hence increasing $I(\hat{y}^*; \mathcal{D} \mid x^*, \theta)$ and reducing memorization. In this section, we describe two tractable ways to achieve this.

4.1 Meta Regularization on Activations

Given $\theta$, the statistical dependency between $x^*$ and $\hat{y}^*$ is controlled by the direct path from $x^*$ to $\hat{y}^*$ and the indirect path through $\mathcal{D}$ (see Figure 1), where the latter is desirable because it leverages the task training data. We can control the information flow between $x^*$ and $\hat{y}^*$ by introducing an intermediate stochastic bottleneck variable $z^*$ such that $q(\hat{y}^* \mid x^*, \phi, \theta) = \int q(\hat{y}^* \mid z^*, \phi, \theta)\, q(z^* \mid x^*, \theta)\, dz^*$ [2], as shown in Figure 5. Now, we would like to maximize $I(\hat{y}^*; \mathcal{D} \mid z^*, \theta)$ to prevent memorization. We can lower bound this mutual information by

$$ I(\hat{y}^*; \mathcal{D} \mid z^*, \theta) \;\ge\; I(\hat{y}^*; x^* \mid \theta) - I(z^*; x^* \mid \theta) \;\ge\; I(\hat{y}^*; x^* \mid \theta) - \mathbb{E}_{p(x^*)}\left[ D_{\mathrm{KL}}\left( q(z^* \mid x^*, \theta)\, \|\, r(z^*) \right) \right] \quad (2) $$

where $r(z^*)$ is a variational approximation to the marginal, the first inequality follows from the statistical dependencies in our model (see Figure 5 and Appendix A.2 for the proof), and we use the fact that $\hat{y}^*$ is conditionally independent of $x^*$ given $z^*$, $\mathcal{D}$, and $\theta$. By simultaneously minimizing $\mathbb{E}_{p(x^*)}\left[ D_{\mathrm{KL}}\left( q(z^* \mid x^*, \theta)\, \|\, r(z^*) \right) \right]$ and maximizing the mutual information $I(\hat{y}^*; x^* \mid \theta)$, we can implicitly encourage the model to use the task training data $\mathcal{D}$.

For non-mutually-exclusive problems, the true label $y^*$ is dependent on $x^*$. Hence, if $I(\hat{y}^*; x^* \mid \theta) = 0$ (i.e., the prediction is independent of $x^*$ given the task training data and $\theta$), the predictive likelihood will be low. This suggests replacing the maximization of $I(\hat{y}^*; x^* \mid \theta)$ in Eq. (2) with minimization of the training loss, resulting in the following regularized training objective

$$ \min_{\theta}\; \mathbb{E}_{\mathcal{T}}\, \mathbb{E}_{q(z^* \mid x^*, \theta)\, q(\phi \mid \mathcal{D}, \theta)} \left[ -\log q(y^* \mid z^*, \phi, \theta) \right] + \beta\, \mathbb{E}_{\mathcal{T}} \left[ D_{\mathrm{KL}}\left( q(z^* \mid x^*, \theta)\, \|\, r(z^*) \right) \right] \quad (3) $$

where $\beta$ modulates the regularizer and we set $r(z^*)$ as $\mathcal{N}(z^*; 0, I)$. We refer to this regularizer as meta-regularization (MR) on the activations.
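In implementation terms, this amounts to a variational-information-bottleneck-style layer on the test input; a minimal sketch (our own illustration, with assumed layer sizes):

```python
import torch
import torch.nn as nn

# A minimal sketch of the activation bottleneck in Eq. (3): x* is mapped to
# a stochastic z*, and the KL of q(z* | x*, theta) to r(z*) = N(0, I) is
# penalized alongside the prediction loss.
class StochasticBottleneck(nn.Module):
    def __init__(self, dim_in, dim_z):
        super().__init__()
        self.net = nn.Linear(dim_in, 2 * dim_z)

    def forward(self, x):
        mu, log_var = self.net(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # reparameterized sample of z*
        # closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over the z dimensions
        kl = 0.5 * (mu.pow(2) + log_var.exp() - 1.0 - log_var).sum(-1).mean()
        return z, kl

# Per batch: z, kl = bottleneck(x_test); loss = nll(predictor(z, phi), y_test) + beta * kl
```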

As we demonstrate in Section 6, we find that this regularizer performs well, but in some cases can fail to prevent the memorization problem. Our hypothesis is that in these cases, the network can sidestep the information constraint by storing the prediction of $y^*$ in a part of $z^*$, which incurs only a small penalty.

4.2 Meta Regularization on Weights

Alternatively, we can penalize the task information stored in the meta-parameters $\theta$. Here, we provide an informal argument and provide the complete argument in Appendix A.3. Analogous to the supervised setting [1], given the meta-training dataset $\mathcal{M}$, we consider $\theta$ as a random variable, where the randomness can be introduced by training stochasticity. We model the stochasticity over $\theta$ with a Gaussian distribution $\mathcal{N}(\theta; \theta_\mu, \theta_\sigma)$ with learned mean and variance parameters per dimension [4, 1]. By penalizing $I(y^*_{1:N}, \mathcal{D}_{1:N}; \theta \mid x^*_{1:N})$, we can limit the information about the training tasks stored in the meta-parameters $\theta$ and thus require the network to use the task training data to make accurate predictions. We can tractably upper bound it by

$$ I(y^*_{1:N}, \mathcal{D}_{1:N}; \theta \mid x^*_{1:N}) \;\le\; \mathbb{E}\left[ D_{\mathrm{KL}}\left( q(\theta \mid \mathcal{M})\, \|\, r(\theta) \right) \right] \quad (4) $$

where $r(\theta)$ is a variational approximation to the marginal, which we set to $\mathcal{N}(\theta; 0, I)$. In practice, we apply meta-regularization to the meta-parameters $\tilde{\theta}$ that are not used to adapt to the task training data, and we denote the other parameters as $\theta$. In this way, we control the complexity of the network that can predict the test labels without using task training data, but we do not limit the complexity of the network that processes the task training data. Our final meta-regularized objective can be written as

$$ \min_{\theta,\, \tilde{\theta}_\mu,\, \tilde{\theta}_\sigma}\; \mathbb{E}_{\mathcal{T}}\, \mathbb{E}_{q(\tilde{\theta};\, \tilde{\theta}_\mu, \tilde{\theta}_\sigma)\, q(\phi \mid \mathcal{D}, \theta)} \left[ -\log q(y^* \mid x^*, \phi, \tilde{\theta}, \theta) \right] + \beta\, D_{\mathrm{KL}}\left( q(\tilde{\theta};\, \tilde{\theta}_\mu, \tilde{\theta}_\sigma)\, \|\, r(\tilde{\theta}) \right) \quad (5) $$

For MAML, we apply meta-regularization to the parameters uninvolved in the task adaptation. For CNP, we apply meta-regularization to the encoder parameters. The detailed algorithms are shown in Algorithm 1 and 2 in the appendix.
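Concretely, MR on the weights keeps a Gaussian posterior over the regularized subset of meta-parameters and adds its KL to a standard normal prior to the loss; a minimal sketch under those assumptions (names and initialization are our own choices):

```python
import torch
import torch.nn as nn

# A sketch of meta-regularization on the weights (Eq. (5)), assuming the
# regularized meta-parameters follow a factorized Gaussian with prior N(0, I).
class GaussianWeights(nn.Module):
    def __init__(self, shape):
        super().__init__()
        self.mu = nn.Parameter(0.1 * torch.randn(*shape))
        self.log_var = nn.Parameter(torch.full(shape, -6.0))

    def sample(self):
        # reparameterized sample of theta~, so gradients reach mu and log_var
        return self.mu + torch.randn_like(self.mu) * (0.5 * self.log_var).exp()

    def kl_to_standard_normal(self):
        # closed-form KL( N(mu, sigma^2) || N(0, I) )
        return 0.5 * (self.mu.pow(2) + self.log_var.exp() - 1.0 - self.log_var).sum()

# Per meta-batch: theta_tilde = w.sample() is used in the layers that see x*
# directly, and beta * w.kl_to_standard_normal() is added to the task losses.
```

During meta-training, a fresh $\tilde{\theta}$ sample is drawn per mini-batch, and $\beta$ scales the KL penalty.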

4.3 Does Meta Regularization Lead to Better Generalization?

Now that we have derived meta regularization approaches for mitigating the memorization problem, we theoretically analyze whether meta regularization leads to better generalization via a PAC-Bayes bound. In particular, we study meta regularization (MR) on the weights (W) of MAML, i.e. MR-MAML (W), as a case study.

Meta-regularization on the weights of MAML uses a Gaussian distribution $\mathcal{N}(\theta; \theta_\mu, \theta_\sigma)$ to model the stochasticity in the weights. Given a task $\mathcal{T}$ and task training data $\mathcal{D}$, the expected error is given by

$$ er(\theta_\mu, \theta_\sigma; \mathcal{T}, \mathcal{D}) = \mathbb{E}_{\theta \sim \mathcal{N}(\theta_\mu, \theta_\sigma)}\, \mathbb{E}_{(x^*, y^*) \sim p(x, y \mid \mathcal{T})}\, \mathbb{E}_{q(\phi \mid \mathcal{D}, \theta)} \left[ L\left( q(\hat{y}^* \mid x^*, \phi, \theta), y^* \right) \right] \quad (6) $$

where the prediction loss $L$ is bounded (in practice, $L$ is MSE on a bounded target space or classification error; we optimize the negative log-likelihood as a bound on the 0-1 loss). Then, we would like to minimize the error on novel tasks

$$ er(\theta_\mu, \theta_\sigma) = \mathbb{E}_{\mathcal{T} \sim p(\mathcal{T}),\, \mathcal{D} \sim \mathcal{T}} \left[ er(\theta_\mu, \theta_\sigma; \mathcal{T}, \mathcal{D}) \right] \quad (7) $$

We only have a finite sample of training tasks, so computing $er(\theta_\mu, \theta_\sigma)$ is intractable, but we can form an empirical estimate:

$$ \hat{er}(\theta_\mu, \theta_\sigma) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{K} \sum_{k=1}^{K} \mathbb{E}_{\theta \sim \mathcal{N}(\theta_\mu, \theta_\sigma)}\, \mathbb{E}_{q(\phi_i \mid \mathcal{D}_i, \theta)} \left[ L\left( q(\hat{y}^* \mid x^*_{ik}, \phi_i, \theta), y^*_{ik} \right) \right] \quad (8) $$

where for exposition we have assumed that the number of per-task test points $K$ is the same for all tasks. We would like to relate $er(\theta_\mu, \theta_\sigma)$ and $\hat{er}(\theta_\mu, \theta_\sigma)$, but the challenge is that $\theta_\mu$ and $\theta_\sigma$ are derived from the meta-training tasks $\mathcal{M}$. There are two sources of generalization error: (i) error due to the finite number of observed tasks and (ii) error due to the finite number of examples observed per task. Closely following the arguments in [3], we apply a standard PAC-Bayes bound to each of these and combine the results with a union bound, resulting in the following theorem.

Theorem 1.

Let $\mathcal{P}(\theta)$ be an arbitrary prior distribution over $\theta$ that does not depend on the meta-training data. Then for any $\delta \in (0, 1]$, with probability at least $1 - \delta$, the following inequality holds uniformly for all choices of $\theta_\mu$ and $\theta_\sigma$,

$$ er(\theta_\mu, \theta_\sigma) \;\le\; \hat{er}(\theta_\mu, \theta_\sigma) + \sqrt{ \frac{ D_{\mathrm{KL}}\left( \mathcal{N}(\theta; \theta_\mu, \theta_\sigma)\, \|\, \mathcal{P} \right) + \log \frac{2n}{\delta} }{ 2(n - 1) } } + \sqrt{ \frac{ D_{\mathrm{KL}}\left( \mathcal{N}(\theta; \theta_\mu, \theta_\sigma)\, \|\, \mathcal{P} \right) + \log \frac{2nK}{\delta} }{ 2(K - 1) } } \quad (9) $$

where $n$ is the number of meta-training tasks and $K$ is the number of per-task validation datapoints.

We defer the proof to Appendix A.4. The key difference from the result in [3] is that we leverage the fact that the per-task data is split into a training set and a validation set.

In practice, we set the prior $\mathcal{P}$ to be the variational marginal $r(\tilde{\theta}) = \mathcal{N}(0, I)$. If we can achieve a low value for the bound, then with high probability, our test error will also be low. As shown in Appendix A.4, by a first-order Taylor expansion of the second term of the RHS of Eq. (9) and setting the coefficient of the KL term as $\beta$, we recover the MR-MAML(W) objective (Eq. (5)). Here $\beta$ trades off between the tightness of the generalization bound and the probability that it holds true. The result of this bound suggests that the proposed meta-regularization on weights does indeed improve generalization on the meta-test set.
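To see how the bound yields the training objective, consider a sketch of the Taylor-expansion step, under the assumption that the bound has the generic form $\hat{er} + c\sqrt{D_{\mathrm{KL}} + a}$ for constants $a, c > 0$ determined by $n$, $K$, and $\delta$:

$$ c\sqrt{D_{\mathrm{KL}}(\mathcal{Q}\, \|\, \mathcal{P}) + a} \;\approx\; c\sqrt{a} + \frac{c}{2\sqrt{a}}\, D_{\mathrm{KL}}(\mathcal{Q}\, \|\, \mathcal{P}) $$

so, up to an additive constant, minimizing the bound amounts to minimizing $\hat{er} + \beta\, D_{\mathrm{KL}}(\mathcal{Q}\, \|\, \mathcal{P})$ with $\beta = c / (2\sqrt{a})$, matching the form of the MR-MAML(W) objective in Eq. (5).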

5 Related Work

Previous works have developed approaches for mitigating various forms of overfitting in meta-learning. These approaches aim to improve generalization in several ways: by reducing the number of parameters that are adapted in MAML [39], by compressing the task embedding [22], through data augmentation from a GAN [38], by using an auxiliary objective on task gradients [15], and via an entropy regularization objective [17]. These methods all focus on the setting with mutually-exclusive task distributions. We instead recognize and formalize the memorization problem, a particular form of overfitting that manifests itself with non-mutually-exclusive tasks, and offer a general and principled solution. Unlike prior methods, our approach is applicable to both contextual and gradient-based meta-learning methods. We additionally validate that prior regularization approaches, namely TAML [17], are not effective for addressing this problem setting.

Our derivation uses a Bayesian interpretation of meta-learning [31, 7, 6, 14, 13, 9, 18, 16]. Some Bayesian meta-learning approaches place a distributional loss on the inferred task variables to constrain them to a prior distribution [12, 13, 26], which amounts to an information bottleneck on the latent task variables. Similarly Zintgraf et al. [39], Lee et al. [22], Guiroy et al. [15] aim to produce simpler or more compressed task adaptation processes. Our approach does the opposite, penalizing information from the inputs and parameters, to encourage the task-specific variables to contain greater information driven by the per-task data.

We use PAC-Bayes theory to study the generalization error of meta-learning and meta-regularization. Pentina & Lampert [25] extend the single-task PAC-Bayes bound [23] to the multi-task setting, which quantifies the gap between the empirical error on training tasks and the expected error on new tasks. More recent research shows that, with tightened generalization bounds as the training objective, such algorithms can reduce the test error for mutually-exclusive tasks [10, 3]. Our analysis differs from these prior works in that we only include the pre-update meta-parameters in the generalization bound, rather than both pre-update and post-update parameters. In the derivation, we also explicitly consider the splitting of data into the task training set and task validation set, which is aligned with the practical setting.

The memorization problem differs from overfitting in conventional supervised learning in several aspects. First, memorization occurs at the task level rather than the datapoint level, and the model memorizes functions rather than labels: within a training task, the model can generalize to new datapoints, but it fails to generalize to new tasks. Second, the source of information for achieving generalization is different. In meta-learning, the information comes from both the meta-training data and the new task training data, whereas in the standard supervised setting it comes only from the training data. Finally, the aim of regularization is different. In the conventional supervised setting, regularization methods such as weight decay [20], dropout [30], the information bottleneck [34, 33], and Bayes-by-Backprop [4] are used to balance the network complexity against the information in the data. Meta-regularization instead governs the model complexity to avoid one complex model solving all tasks, while still allowing the model's dependency on the task data to be complex. We further empirically validate this difference, finding that standard regularization techniques do not solve the memorization problem.

6 Experiments

In the experimental evaluation, we aim to answer the following questions: (1) How prevalent is the memorization problem across different algorithms and domains? (2) How does the memorization problem affect the performance of algorithms on non-mutually-exclusive task distributions? (3) Is our meta-regularization approach effective for mitigating the problem and is it compatible with multiple types of meta-learning algorithms? (4) Is the problem of memorization empirically distinct from that of the standard overfitting problem?

To answer these questions, we propose several meta-learning problems involving non-mutually-exclusive task distributions, including two problems that are adapted from prior benchmarks with mutually-exclusive task distributions. We consider model-agnostic meta-learning (MAML) and conditional neural processes (CNP) as representative gradient-based and contextual meta-learning algorithms, and study both variants of our method in combination with each. When comparing meta-learning algorithms with and without meta-regularization, we use the same neural network architecture, while other hyperparameters are tuned via cross-validation per problem.

6.1 Sinusoid Regression

First, we consider a toy sinusoid regression problem that is non-mutually-exclusive. The data for each task is created in the following way: the amplitude $A$ of the sinusoid is uniformly sampled from a fixed set of equally-spaced points; $u$ is sampled uniformly from a fixed interval, and $y$ is sampled from a Gaussian centered at $A \sin(u)$. We provide both $u$ and the amplitude $A$ (as a one-hot vector) as input, i.e., $x = (u, A)$. At test time, we expand the range of the tasks by sampling the data-generating amplitude uniformly from the continuous interval spanned by the training amplitudes and using a random one-hot vector as the amplitude input to the network. The meta-training tasks are a proper subset of the meta-test tasks.

Without the additional amplitude input, both MAML and CNP can easily solve the tasks and generalize to the meta-test tasks. However, once we add the additional amplitude input, which indicates the task identity, we find that both MAML and CNP converge to the complete memorization solution and fail to generalize well to test data (Table 1 and Appendix Figures 7 and 8). Meta-regularized MAML (MR-MAML) and meta-regularized CNP (MR-CNP) instead converge to a solution that adapts to the data, and as a result, greatly outperform the unregularized methods.

Methods | MAML | MR-MAML (A) (ours) | MR-MAML (W) (ours) | CNP | MR-CNP (A) (ours) | MR-CNP (W) (ours)
5 shot | 0.46 (0.04) | 0.17 (0.03) | 0.16 (0.04) | 0.91 (0.10) | 0.10 (0.01) | 0.11 (0.02)
10 shot | 0.13 (0.01) | 0.07 (0.02) | 0.06 (0.01) | 0.92 (0.05) | 0.09 (0.01) | 0.09 (0.01)

Table 1: Test MSE for the non-mutually-exclusive sinusoid regression problem. We compare MAML and CNP against meta-regularized MAML (MR-MAML) and meta-regularized CNP (MR-CNP), where regularization is either on the activations (A) or the weights (W). We report the mean over 5 trials and the standard deviation in parentheses.

Figure 2: Test MSE on the non-mutually-exclusive sinusoid problem as a function of the number of gradient steps used in the inner loop of MAML and MR-MAML. For each trial, we calculate the mean MSE over 100 randomly generated meta-test tasks. We report the mean and standard deviation over 5 random trials.

6.2 Pose Prediction

To illustrate the memorization problem on a more realistic task, we create a multi-task regression dataset based on the Pascal 3D data [37] (see Appendix A.5.1 for a complete description). We randomly select 50 objects for meta-training and the other 15 objects for meta-testing. For each object, we use MuJoCo [35] to render images with random orientations of the instance on a table, visualized in Figure 1. For the meta-learning algorithm, the observation $x$ is the gray-scale image and the label $y$ is the orientation relative to a fixed canonical pose. Because the number of objects in the meta-training dataset is small, it is straightforward for a single network to memorize the canonical pose for each training object and to infer the orientation from the input image, thus achieving low meta-training error without using $\mathcal{D}$. However, this solution performs poorly at meta-test time because the model has not seen the novel objects and their canonical poses.

Optimization modes and hyperparameter sensitivity. We select the learning rate for each method and the regularization strength $\beta$ for meta-regularization from candidate sets, and report the results with the best hyperparameters (as measured on the meta-validation set) for each method. In this domain, we find that the convergence point of the meta-learning algorithm is determined by both the optimization landscape of the objective and the training dynamics, which vary due to stochastic gradients and the random initialization. In particular, we observe that there are two modes of the objective, one that corresponds to complete memorization and one that corresponds to successful adaptation to the task data. As illustrated in the Appendix, we find that models that converge to a memorization solution have lower training error than solutions that use the task training data, indicating a clear need for meta-regularization. When the meta-regularization is on the activations, the solution that the algorithms converge to depends on the learning rate, while MR on the weights consistently converges to the adaptation solution (see Appendix Figure 9 for the sensitivity analysis). This suggests that MR on the activations is not always successful at preventing memorization. Our hypothesis is that there exists a solution in which the bottlenecked activations encode only the predicted label and discard other information. Such a solution can achieve both low training MSE and low regularization loss without using task training data, particularly if the predicted label contains a small number of bits (i.e., because the activations will have low information complexity). However, this solution does not achieve low regularization error when applying MR to the weights, because the function needed to produce the predicted label does not have low information complexity. As a result, meta-regularization on the weights does not suffer from this pathology and is robust to different learning rates. We therefore use regularization on the weights as the proposed methodology in the following experiments and in the algorithms in Appendix A.1.

Quantitative results. We compare MAML and CNP with their meta-regularized versions (Table 2). We additionally include fine-tuning as a baseline, which trains a single network on all of the instances jointly and then fine-tunes on the task training data. Meta-learning with meta-regularization (on weights) outperforms all competing methods by a large margin. We show test error as a function of the meta-regularization coefficient $\beta$ in Figure 3. The curve reflects the trade-off that arises as the amount of information contained in the weights changes: $\beta$ gives a knob that allows us to tune the degree to which the model uses the data to adapt versus relying on the prior.

Figure 3: The performance of MAML and CNP with meta-regularization on the weights, as a function of the regularization strength $\beta$. We observe that $\beta$ provides a knob with which we can control the degree to which the algorithm adapts versus memorizes. When $\beta$ is small, we observe memorization, leading to large test error; when $\beta$ is too large, the network does not store enough information in the weights to perform the task. Crucially, in the middle of these two extremes, meta-regularization is effective in inducing adaptation, leading to good generalization. The plot shows the mean and standard deviation across meta-training runs.
Method | MAML | MR-MAML (W) (ours) | CNP | MR-CNP (W) (ours) | FT | FT + Weight Decay
MSE | 5.39 (1.31) | 2.26 (0.09) | 8.48 (0.12) | 2.89 (0.18) | 7.33 (0.35) | 6.16 (0.12)

Table 2: Meta-test MSE for the pose prediction problem. We compare MR-MAML (ours) with conventional MAML and fine-tuning (FT). We report the average over 5 trials and the standard deviation in parentheses.

Comparison to standard regularization. We compare our meta-regularization with standard regularization techniques, weight decay [20] and Bayes-by-Backprop [4], in Table 3. We observe that simply applying standard regularization to all the weights, as in conventional supervised learning, does not solve the memorization problem, which validates that the memorization problem differs from the standard overfitting problem.

Methods | CNP | CNP + Weight Decay | CNP + BbB | MR-CNP (W) (ours)
MSE | 8.48 (0.12) | 6.86 (0.27) | 7.73 (0.82) | 2.89 (0.18)

Table 3: Meta-test MSE for the pose prediction problem. We compare MR-CNP (ours) with conventional CNP, CNP with weight decay, and CNP with Bayes-by-Backprop (BbB) regularization on all the weights. We report the average over 5 trials and the standard deviation in parentheses.

6.3 Omniglot and MiniImagenet Classification

Next, we study memorization in the few-shot classification problem by adapting the few-shot Omniglot [21] and MiniImagenet [27, 36] benchmarks to the non-mutually-exclusive setting. In the non-mutually-exclusive $N$-way $K$-shot classification problem, each class is (randomly) assigned a fixed classification label from $1$ to $N$. For each task, we randomly select a corresponding class for each classification label and sample $K$ task training data points and a set of task test data points from that class. (We assume that the number of classes in the meta-training set is larger than $N$.) This ensures that each class keeps a single classification label across tasks, making different tasks non-mutually-exclusive (see Appendix A.5.2 for details).
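For contrast with the mutually-exclusive sampler in Section 3, here is a sketch of this non-mutually-exclusive construction (helper names are ours; the split sizes are illustrative):

```python
import random

# A sketch of the non-mutually-exclusive N-way construction described above:
# every class keeps one fixed label across all tasks, so a strong model can
# eventually infer the label from the input alone.
def make_nme_sampler(examples_by_class, n_way=5, k_shot=1, k_query=15):
    classes = sorted(examples_by_class)
    random.shuffle(classes)
    fixed_label = {c: i % n_way for i, c in enumerate(classes)}  # permanent class -> label

    def sample_task():
        chosen = {}
        for c in random.sample(classes, len(classes)):  # random class per label slot
            chosen.setdefault(fixed_label[c], c)
            if len(chosen) == n_way:
                break
        train, test = [], []
        for label, c in chosen.items():
            ex = random.sample(examples_by_class[c], k_shot + k_query)
            train += [(img, label) for img in ex[:k_shot]]
            test += [(img, label) for img in ex[k_shot:]]
        return train, test

    return sample_task
```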

We evaluate MAML, TAML [17], MR-MAML (ours), fine-tuning, and a nearest-neighbor baseline on non-mutually-exclusive classification tasks (Table 4). We find that MR-MAML significantly outperforms previous methods on all of these tasks. To better understand the problem, for the MAML variants, we calculate the pre-update accuracy (before adaptation on the task training data) on the meta-training data in Appendix Table 5. The high pre-update meta-training accuracy and low meta-test accuracy are evidence of the memorization problem for MAML and TAML, indicating that they learn a model that ignores the task data. In contrast, MR-MAML successfully controls the pre-update accuracy to be near chance and encourages the learner to use the task training data to achieve low meta-training error, resulting in good performance at meta-test time.

Finally, we verify that meta-regularization does not degrade performance on standard mutually-exclusive tasks. We evaluate performance as a function of the regularization strength $\beta$ on the standard 20-way 1-shot Omniglot task (Appendix Figure 10), and we find that small values of $\beta$ lead to slight improvements over MAML. This indicates that meta-regularization substantially improves performance in the non-mutually-exclusive setting without degrading performance in other settings.

NME Omniglot | 20-way 1-shot | 20-way 5-shot
MAML | 7.8 (0.2) | 50.7 (22.9)
TAML [17] | 9.6 (2.3) | 67.9 (2.3)
MR-MAML (W) (ours) | 83.3 (0.8) | 94.1 (0.1)

NME MiniImagenet | 5-way 1-shot | 5-way 5-shot
Fine-tuning | 28.9 (0.5) | 49.8 (0.8)
Nearest-neighbor | 41.1 (0.7) | 51.0 (0.7)
MAML | 26.3 (0.7) | 41.6 (2.6)
TAML [17] | 26.1 (0.6) | 44.2 (1.7)
MR-MAML (W) (ours) | 43.6 (0.6) | 53.8 (0.9)

Table 4: Meta-test accuracy (%) on non-mutually-exclusive (NME) classification. The fine-tuning and nearest-neighbor baseline results for MiniImagenet are from [27].

7 Conclusion and Discussion

Meta-learning has achieved remarkable success in few-shot learning problems. However, we identify a pitfall of current algorithms: the need to create task distributions that are mutually exclusive. This requirement restricts the domains to which meta-learning can be applied. We formalize the failure mode that results from training on non-mutually-exclusive tasks, i.e., the memorization problem, and distinguish it as a function-level overfitting problem, in contrast to the standard label-level overfitting in supervised learning.

We illustrate the memorization problem with different meta-learning algorithms on a number of domains. To address the problem, we propose an algorithm-agnostic meta-regularization (MR) approach that leverages an information-theoretic perspective of the problem. The key idea is that, by placing a soft restriction on the information flow from the meta-parameters to the prediction of test-set labels, we can encourage the meta-learner to use the task training data during meta-training. We achieve this by controlling the complexity of the model prior to task adaptation.

The memorization issue is quite broad and is likely to occur in a wide range of real-world applications, for example, personalized speech recognition systems, learning robots that can adapt to different environments [24], and learning goal-conditioned manipulation skills using trial-and-error data. Further, this challenge may also be prevalent in other conditional prediction problems, beyond meta-learning, an interesting direction for future study. By both recognizing the challenge of memorization and developing a general and lightweight approach for solving it, we believe that this work represents an important step towards making meta-learning algorithms applicable to and effective on any problem domain.

References

Appendix A Appendix

A.1 Algorithm

We present the detailed algorithm for meta-regularization on the weights with conditional neural processes (CNP) in Algorithm 1 and with model-agnostic meta-learning (MAML) in Algorithm 2. For CNP, we add the regularization on the weights of the encoder and leave the other weights unrestricted. For MAML, we similarly regularize the weights from the input to an intermediate hidden layer and leave the weights used for adaptation unregularized. In this way, we restrict the complexity of the pre-adaptation model, not the post-adaptation model.

input :  Task distribution $p(\mathcal{T})$; encoder weight distribution $q(\tilde{\theta}; \tilde{\theta}_\mu, \tilde{\theta}_\sigma)$ with Gaussian parameters $(\tilde{\theta}_\mu, \tilde{\theta}_\sigma)$; prior distribution $r(\tilde{\theta})$ and Lagrangian multiplier $\beta$; $\theta$, which parameterizes the feature extractor and decoder; stepsize $\eta$.
output : Network parameters $(\tilde{\theta}_\mu, \tilde{\theta}_\sigma)$, $\theta$.
Initialize $(\tilde{\theta}_\mu, \tilde{\theta}_\sigma)$, $\theta$ randomly;
while not converged do
       Sample a mini-batch of tasks $\{\mathcal{T}_i\}$ from $p(\mathcal{T})$;
       Sample $\tilde{\theta} \sim q(\tilde{\theta}; \tilde{\theta}_\mu, \tilde{\theta}_\sigma)$ with reparameterization;
       for all $\mathcal{T}_i$ do
             Sample $\mathcal{D}_i = (\mathbf{x}_i, \mathbf{y}_i)$, $\mathcal{D}^*_i = (\mathbf{x}^*_i, \mathbf{y}^*_i)$ from $\mathcal{T}_i$;
             Encode observations $\mathbf{z}_i = h_{\tilde{\theta}}(\mathbf{x}_i)$, $\mathbf{z}^*_i = h_{\tilde{\theta}}(\mathbf{x}^*_i)$;
             Compute task context $\phi_i = a_\theta(\mathbf{z}_i, \mathbf{y}_i)$ with the aggregator;

       Update $\theta \leftarrow \theta - \eta \nabla_\theta \sum_i -\log q(\mathbf{y}^*_i \mid \mathbf{z}^*_i, \phi_i, \theta)$;
       Update $(\tilde{\theta}_\mu, \tilde{\theta}_\sigma) \leftarrow (\tilde{\theta}_\mu, \tilde{\theta}_\sigma) - \eta \nabla_{(\tilde{\theta}_\mu, \tilde{\theta}_\sigma)} \left[ \sum_i -\log q(\mathbf{y}^*_i \mid \mathbf{z}^*_i, \phi_i, \theta) + \beta\, D_{\mathrm{KL}}\left( q(\tilde{\theta}; \tilde{\theta}_\mu, \tilde{\theta}_\sigma)\, \|\, r(\tilde{\theta}) \right) \right]$
Algorithm 1 Meta-Regularized CNP
input :  Task distribution $p(\mathcal{T})$; weight distribution $q(\tilde{\theta}; \tilde{\theta}_\mu, \tilde{\theta}_\sigma)$ with Gaussian parameters $(\tilde{\theta}_\mu, \tilde{\theta}_\sigma)$; prior distribution $r(\tilde{\theta})$ and Lagrangian multiplier $\beta$; stepsizes $\alpha$, $\eta$.
output : Network parameters $(\tilde{\theta}_\mu, \tilde{\theta}_\sigma)$, $\theta$.
Initialize $(\tilde{\theta}_\mu, \tilde{\theta}_\sigma)$, $\theta$ randomly;
while not converged do
       Sample a mini-batch of tasks $\{\mathcal{T}_i\}$ from $p(\mathcal{T})$;
       Sample $\tilde{\theta} \sim q(\tilde{\theta}; \tilde{\theta}_\mu, \tilde{\theta}_\sigma)$ with reparameterization;
       for all $\mathcal{T}_i$ do
             Sample $\mathcal{D}_i = (\mathbf{x}_i, \mathbf{y}_i)$, $\mathcal{D}^*_i = (\mathbf{x}^*_i, \mathbf{y}^*_i)$ from $\mathcal{T}_i$;
             Encode observations $\mathbf{z}_i = h_{\tilde{\theta}}(\mathbf{x}_i)$, $\mathbf{z}^*_i = h_{\tilde{\theta}}(\mathbf{x}^*_i)$;
             Compute task-specific parameters $\phi_i = \theta + \alpha \nabla_\theta \log q(\mathbf{y}_i \mid \mathbf{z}_i, \theta)$;

       Update $\theta \leftarrow \theta - \eta \nabla_\theta \sum_i -\log q(\mathbf{y}^*_i \mid \mathbf{z}^*_i, \phi_i)$;
       Update $(\tilde{\theta}_\mu, \tilde{\theta}_\sigma) \leftarrow (\tilde{\theta}_\mu, \tilde{\theta}_\sigma) - \eta \nabla_{(\tilde{\theta}_\mu, \tilde{\theta}_\sigma)} \left[ \sum_i -\log q(\mathbf{y}^*_i \mid \mathbf{z}^*_i, \phi_i) + \beta\, D_{\mathrm{KL}}\left( q(\tilde{\theta}; \tilde{\theta}_\mu, \tilde{\theta}_\sigma)\, \|\, r(\tilde{\theta}) \right) \right]$
Algorithm 2 Meta-Regularized MAML
input :  Meta-testing task $\mathcal{T}$ with training data $\mathcal{D} = (\mathbf{x}, \mathbf{y})$ and testing input $\mathbf{x}^*$; optimized parameters $(\tilde{\theta}_\mu, \tilde{\theta}_\sigma)$, $\theta$.
output : Prediction $\hat{\mathbf{y}}^*$
for $j$ from 1 to $J$ do
       Sample $\tilde{\theta} \sim q(\tilde{\theta}; \tilde{\theta}_\mu, \tilde{\theta}_\sigma)$;
       Encode observations $\mathbf{z} = h_{\tilde{\theta}}(\mathbf{x})$, $\mathbf{z}^* = h_{\tilde{\theta}}(\mathbf{x}^*)$;
       Compute task-specific parameters $\phi = a_\theta(\mathbf{z}, \mathbf{y})$ for MR-CNP and $\phi = \theta + \alpha \nabla_\theta \log q(\mathbf{y} \mid \mathbf{z}, \theta)$ for MR-MAML;
       Predict $\hat{\mathbf{y}}^*_j \sim q(\hat{\mathbf{y}}^* \mid \mathbf{z}^*, \phi, \theta)$
Return prediction $\hat{\mathbf{y}}^* = \frac{1}{J} \sum_j \hat{\mathbf{y}}^*_j$
Algorithm 3 Meta-Regularized Methods in Meta-testing

A.2 Meta Regularization on Activations

We show that $I(\hat{y}^*; \mathcal{D} \mid z^*, \theta) \ge I(\hat{y}^*; x^* \mid \theta) - I(z^*; x^* \mid \theta)$. By Figure 5, we have that $I(\hat{y}^*; x^* \mid \mathcal{D}, z^*, \theta) = 0$. By the chain rule of mutual information we have

$$ I(\hat{y}^*; \mathcal{D} \mid z^*, \theta) = I(\hat{y}^*; \mathcal{D}, x^* \mid z^*, \theta) - I(\hat{y}^*; x^* \mid \mathcal{D}, z^*, \theta) \ge I(\hat{y}^*; x^* \mid z^*, \theta) = I(x^*; \hat{y}^*, z^* \mid \theta) - I(x^*; z^* \mid \theta) \ge I(\hat{y}^*; x^* \mid \theta) - I(z^*; x^* \mid \theta) \quad (10) $$

A.3 Meta Regularization on Weights

Similar to [1], we use $\hat{\theta}$ to denote the unknown parameters of the true data generating distribution. This defines a joint distribution $p(\hat{\theta}, \mathcal{M}) = p(\hat{\theta})\, p(\mathcal{M} \mid \hat{\theta})$. Furthermore, we have a predictive distribution $q(\hat{y}^* \mid x^*, \mathcal{D}, \theta)$.

The meta-training loss in Eq. 1 is an upper bound for the cross entropy between the data distribution and the predictive distribution. Using an information decomposition of the cross entropy [1], we have

(11)

Here the only negative term is $I(y^*_{1:N}, \mathcal{D}_{1:N}; \theta \mid x^*_{1:N}, \hat{\theta})$, which quantifies the information that the meta-parameters contain about the meta-training data beyond what can be inferred from the data generating parameters (i.e., memorization). Without proper regularization, the cross entropy loss can be minimized by maximizing this term. We can control its value by upper bounding it

$$ I(y^*_{1:N}, \mathcal{D}_{1:N}; \theta \mid x^*_{1:N}, \hat{\theta}) \;\le\; \mathbb{E}\left[ D_{\mathrm{KL}}\left( q(\theta \mid \mathcal{M})\, \|\, r(\theta) \right) \right] $$

where the second equality in the full derivation follows because $\theta$ and $\hat{\theta}$ are conditionally independent given $\mathcal{M}$. This gives the regularization in Section 4.2.

A.4 Proof of the PAC-Bayes Generalization Bound

First, we prove a more general result and then specialize it. The goal of the meta-learner is to extract information about the meta-training tasks and the test task training data to serve as a prior for test examples from the novel task. This information will be in terms of a distribution $\mathcal{Q}$ over possible models. When learning a new task, the meta-learner uses the task training data $\mathcal{D}$ and a model parameterized by $\theta$ (sampled from $\mathcal{Q}$) and outputs a distribution over models. Our goal is to learn $\mathcal{Q}$ such that it performs well on novel tasks.

To formalize this, define

$$ er(\mathcal{Q}; \mathcal{T}, \mathcal{D}) = \mathbb{E}_{\theta \sim \mathcal{Q}}\, \mathbb{E}_{(x^*, y^*) \sim p(x, y \mid \mathcal{T})} \left[ \ell(\theta, \mathcal{D}, x^*, y^*) \right] \quad (12) $$

where $\ell$ is a bounded loss in $[0, 1]$. Then, we would like to minimize the error on novel tasks

$$ er(\mathcal{Q}) = \mathbb{E}_{\mathcal{T} \sim p(\mathcal{T}),\, \mathcal{D} \sim \mathcal{T}} \left[ er(\mathcal{Q}; \mathcal{T}, \mathcal{D}) \right] \quad (13) $$

Because we only have a finite training set, computing $er(\mathcal{Q})$ is intractable, but we can form an empirical estimate:

$$ \hat{er}(\mathcal{Q}) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{K} \sum_{k=1}^{K} \mathbb{E}_{\theta \sim \mathcal{Q}} \left[ \ell(\theta, \mathcal{D}_i, x^*_{ik}, y^*_{ik}) \right] \quad (14) $$

where for exposition we assume the number of per-task test points $K$ is the same for all tasks. We would like to relate $er(\mathcal{Q})$ and $\hat{er}(\mathcal{Q})$, but the challenge is that $\mathcal{Q}$ may depend on the meta-training data due to the learning algorithm. There are two sources of generalization error: (i) error due to the finite number of observed tasks and (ii) error due to the finite number of examples observed per task. Closely following the arguments in [3], we apply a standard PAC-Bayes bound to each of these and combine the results with a union bound.

Theorem.

Let $\mathcal{Q}$ be a distribution over parameters $\theta$ and let $\mathcal{P}$ be a prior distribution. Then for any $\delta \in (0, 1]$, with probability at least $1 - \delta$, the following inequality holds uniformly for all distributions $\mathcal{Q}$,

$$ er(\mathcal{Q}) \;\le\; \hat{er}(\mathcal{Q}) + \sqrt{ \frac{ D_{\mathrm{KL}}(\mathcal{Q}\, \|\, \mathcal{P}) + \log \frac{2n}{\delta} }{ 2(n - 1) } } + \sqrt{ \frac{ D_{\mathrm{KL}}(\mathcal{Q}\, \|\, \mathcal{P}) + \log \frac{2nK}{\delta} }{ 2(K - 1) } } \quad (15) $$

Proof.

To start, we state a classical PAC-Bayes bound and use it to derive generalization bounds on task-level and datapoint-level generalization, respectively.

Theorem 2.

Let $\mathcal{X}$ be a sample space (i.e., a space of possible datapoints). Let $p(X)$ be a distribution over $\mathcal{X}$ (i.e., a data distribution). Let $\Theta$ be a hypothesis space. Given a "loss function" $\ell(\theta, X): \Theta \times \mathcal{X} \to [0, 1]$ and a collection of $M$ i.i.d. random variables $X_1, \ldots, X_M$ sampled from $p(X)$, let $\mathcal{P}$ be a prior distribution over hypotheses in $\Theta$ that does not depend on the samples but may depend on the data distribution $p(X)$. Then, for any $\delta \in (0, 1]$, with probability at least $1 - \delta$, the following bound holds uniformly for all posterior distributions $\mathcal{Q}$ over $\Theta$

$$ \mathbb{E}_{X \sim p}\, \mathbb{E}_{\theta \sim \mathcal{Q}} \left[ \ell(\theta, X) \right] \;\le\; \frac{1}{M} \sum_{m=1}^{M} \mathbb{E}_{\theta \sim \mathcal{Q}} \left[ \ell(\theta, X_m) \right] + \sqrt{ \frac{ D_{\mathrm{KL}}(\mathcal{Q}\, \|\, \mathcal{P}) + \log \frac{M}{\delta} }{ 2(M - 1) } } \quad (16) $$

Meta-level generalization. First, we bound the task-level generalization, that is, we relate $er(\mathcal{Q})$ to $\frac{1}{n} \sum_i er(\mathcal{Q}; \mathcal{T}_i, \mathcal{D}_i)$. Letting the samples be $(\mathcal{T}_i, \mathcal{D}_i)$ and the loss be $er(\mathcal{Q}; \mathcal{T}_i, \mathcal{D}_i)$, Theorem 2 says that for any $\delta \in (0, 1]$, with probability at least $1 - \delta$,

$$ er(\mathcal{Q}) \;\le\; \frac{1}{n} \sum_{i=1}^{n} er(\mathcal{Q}; \mathcal{T}_i, \mathcal{D}_i) + \sqrt{ \frac{ D_{\mathrm{KL}}(\mathcal{Q}\, \|\, \mathcal{P}) + \log \frac{n}{\delta} }{ 2(n - 1) } } \quad (17) $$

where $\mathcal{P}$ is a prior over $\theta$.

Within-task generalization. Next, we relate $er(\mathcal{Q}; \mathcal{T}_i, \mathcal{D}_i)$ to its empirical estimate via the PAC-Bayes bound. For a fixed task $\mathcal{T}_i$, task training data $\mathcal{D}_i$, a prior $\mathcal{P}_i$ that only depends on the training data, and any $\delta_i \in (0, 1]$, we have that, with probability at least $1 - \delta_i$,

$$ er(\mathcal{Q}_i; \mathcal{T}_i, \mathcal{D}_i) \;\le\; \frac{1}{K} \sum_{k=1}^{K} \mathbb{E}_{\theta \sim \mathcal{Q}_i} \left[ \ell(\theta, \mathcal{D}_i, x^*_{ik}, y^*_{ik}) \right] + \sqrt{ \frac{ D_{\mathrm{KL}}(\mathcal{Q}_i\, \|\, \mathcal{P}_i) + \log \frac{K}{\delta_i} }{ 2(K - 1) } } $$

Now, we choose $\mathcal{P}_i$ to be the distribution induced by the prior $\mathcal{P}$ and restrict $\mathcal{Q}_i$ to be of the form induced by $\mathcal{Q}$ for any $\mathcal{Q}$. While $\mathcal{P}_i$ and $\mathcal{Q}_i$ may be complicated distributions (especially if they are defined implicitly), we know that with this choice of $\mathcal{P}_i$ and $\mathcal{Q}_i$, $D_{\mathrm{KL}}(\mathcal{Q}_i\, \|\, \mathcal{P}_i) \le D_{\mathrm{KL}}(\mathcal{Q}\, \|\, \mathcal{P})$ [5]; hence, we have