1 Introduction
The goal of meta-learning is to extract common knowledge from a set of training tasks in order to solve held-out tasks more efficiently and accurately. One avenue for learning and reusing this knowledge is to learn a set of modules that can be reused or repurposed at test time as needed. Modularity is intrinsic to deep learning, and examples range from receptive fields or layers to larger components, such as perception or policy networks. This modularity enables pre-trained convolutional neural networks to be rapidly fine-tuned on other image classification datasets, for example. However, meta-learning algorithms that use test-time adaptation of a learned model, such as MAML
(Finn et al., 2017) and Reptile (Nichol et al., 2018), do not explicitly account for the modularity present in their models. We propose here a hierarchical Bayesian modelling approach to modular meta-learning. The parameters within a module are assumed conditionally independent across tasks, and their mean follows a normal distribution parameterized by a per-module “central” parameter and a variance term, which acts as a local shrinkage parameter; see, e.g.,
Gelman et al. (2013). As the marginal likelihood is typically intractable in the scenarios we are interested in, we estimate the shrinkage parameters by maximizing an approximation of a predictive likelihood criterion using implicit differentiation. Empirically, we find that this approach is numerically stable, and we provide a theoretical analysis on a toy example suggesting it exhibits good properties. Related work is discussed in more detail in
Appendix A. We evaluate our shrinkage-based approach on two synthetic few-shot meta-learning tasks as a proof of concept. We show that properly accounting for modularity is crucial for achieving good performance in these tasks. Our method outperforms Reptile (Nichol et al., 2018), MAML (Finn et al., 2017), and a modular variant of MAML.
2 Hierarchical Bayes formulation of modular meta-learning
We consider a multi-task learning scenario with a number of tasks bearing some similarity to each other. For each task indexed by t = 1, …, T, a finite set of observations D_t is assumed available and is modelled using a probabilistic model with parameters θ_t; these parameters are partitioned into M modules, i.e., θ_t = (θ_{t,1}, …, θ_{t,M}), where θ_{t,m} ∈ R^{d_m}. For example, θ_{t,m} could be the weights of the m-th layer of a neural network for task t.
To model the relationship between tasks, we adopt a hierarchical Bayesian approach. We assume that the parameters θ_t for a task are conditionally independent of those of all other tasks given some “central” parameters φ = (φ_1, …, φ_M), with θ_{t,m} ~ N(φ_m, σ_m² I_{d_m}) for all t and m, where N(μ, Σ) denotes the normal distribution with mean μ and covariance matrix Σ, and I_{d_m} is the identity matrix with appropriate dimensionality. The parameter σ_m² is a shrinkage parameter quantifying how far θ_{t,m} can deviate from φ_m. If σ_m² → 0, then θ_{t,m} → φ_m for all t, i.e., when σ_m² shrinks to zero the parameters of module m become task-independent; see Fig. 2 for an illustration. We assign a non-informative prior to φ and σ² = (σ_1², …, σ_M²), and follow an empirical Bayes approach to learn their values from the data. This allows the model to automatically decide which modules to reuse and which to adapt.
If we denote by D_{1:T} the collection of observations corresponding to tasks 1, …, T and θ_{1:T} = (θ_1, …, θ_T), then the Bayesian hierarchical model considered here can be summarized by the following probability density (with graphical model shown in Fig. 2):
(1)  p(D_{1:T}, θ_{1:T}, φ, σ²) = p(φ) p(σ²) ∏_{t=1}^{T} [ p(D_t | θ_t) ∏_{m=1}^{M} N(θ_{t,m}; φ_m, σ_m² I_{d_m}) ]
A standard learning strategy is to maximize the marginal likelihood p(D_{1:T} | φ, σ²) to obtain a point estimate of (φ, σ²), and then compute the resulting posterior p(θ_{1:T} | D_{1:T}, φ, σ²). However, in standard applications of meta-learning, this approach does not scale due to the intractability of the marginalization. In the following section, we propose an approximate, scalable Bayesian approach to parameter estimation in this model.
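The generative process above can be sketched in NumPy as follows (an illustrative sketch: the module shapes, the identity-map likelihood, and the values of φ and σ² are assumptions for demonstration, not settings from this paper):

```python
import numpy as np

def sample_tasks(phi, sigma2, num_tasks, num_obs, noise_var=0.1, seed=None):
    """Sample tasks from the hierarchical model: for each task, draw
    theta_{t,m} ~ N(phi_m, sigma2_m * I) per module, then observations
    y ~ N(theta_t, noise_var * I) (identity-map likelihood for simplicity).
    phi: list of per-module central parameter vectors.
    sigma2: list of per-module shrinkage variances."""
    rng = np.random.default_rng(seed)
    tasks = []
    for _ in range(num_tasks):
        theta = [p + np.sqrt(s2) * rng.standard_normal(p.shape)
                 for p, s2 in zip(phi, sigma2)]
        flat = np.concatenate(theta)
        y = flat + np.sqrt(noise_var) * rng.standard_normal((num_obs, flat.size))
        tasks.append((theta, y))
    return tasks

# Two modules: the first is effectively shared (sigma2 ~ 0), the second adapts.
phi = [np.zeros(3), np.ones(2)]
sigma2 = [1e-8, 1.0]
tasks = sample_tasks(phi, sigma2, num_tasks=5, num_obs=10, seed=0)
```

A near-zero σ_m² pins that module's parameters to φ_m across all tasks, while a large σ_m² lets them vary per task, which is exactly the reuse-versus-adapt trade-off the model learns.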
3 Learning strategy
To deal with the fact that a large, possibly infinite, number of tasks is available, we propose an iterative algorithm that at each iteration first samples a batch of B tasks t = 1, …, B and collects the corresponding datasets D_{1:B}. The resulting probabilistic model for these datasets is thus of the form (1). As the corresponding marginal likelihood and posterior are typically intractable, it might be tempting to maximize the joint distribution (1) w.r.t. (θ_{1:B}, φ, σ²) to estimate those parameters. Unfortunately, this approach fails, as explained on a toy example in Appendix B. In short, even if the model is correctly specified, the optimal value of σ_m² when maximizing the joint distribution is σ_m² = 0, and modules do not adapt when σ_m² = 0.
We instead take an approach similar to that of many recent meta-learning algorithms and split each dataset D_t into train and validation subsets, D_t^train and D_t^val, respectively. We estimate the task parameters θ_{1:B} and shared meta-parameters φ by MAP (maximum a posteriori) on the train subsets given σ². This is equivalent to maximizing the log-joint density in Eq. 1, giving
(2)  θ̂_{1:B}(σ²), φ̂(σ²) = argmax_{θ_{1:B}, φ} ∑_{t=1}^{B} [ log p(D_t^train | θ_t) + ∑_{m=1}^{M} log N(θ_{t,m}; φ_m, σ_m² I) ]
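For a Gaussian likelihood, the inner MAP problem of Eq. 2 has a closed form that makes the shrinkage behavior explicit: the task estimate is a precision-weighted average of the task's sample mean and the central parameter φ. A minimal sketch, assuming a Gaussian observation model with known noise variance (an illustrative assumption, not the general setting):

```python
import numpy as np

def map_adapt_gaussian(y, phi, sigma2, noise_var):
    """Closed-form MAP estimate of a task's parameters theta (inner problem
    of Eq. 2) for y_n ~ N(theta, noise_var * I) with prior
    theta ~ N(phi, sigma2 * I): a precision-weighted average of the
    sample mean and the central parameter phi."""
    n = y.shape[0]
    precision = n / noise_var + 1.0 / sigma2
    return (n * y.mean(axis=0) / noise_var + phi / sigma2) / precision

rng = np.random.default_rng(0)
phi = np.zeros(2)
y = 1.0 + 0.1 * rng.standard_normal((50, 2))          # task data centered near 1
theta_free = map_adapt_gaussian(y, phi, 100.0, 0.01)  # large sigma2: follows the data
theta_tied = map_adapt_gaussian(y, phi, 1e-8, 0.01)   # tiny sigma2: stays at phi
```

As σ_m² → 0 a module is tied to φ_m (reused), and as σ_m² grows it is free to adapt to the task data; Eq. 3 below chooses this trade-off per module.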
Given these, we estimate the shrinkage parameters σ² by maximizing the predictive log-likelihood on the validation subsets D_{1:B}^val:
(3)  σ̂² = argmax_{σ²} ∑_{t=1}^{B} log p(D_t^val | θ̂_t(σ²))
where the marginalization over the posterior of θ_t is approximated by the MAP point estimate θ̂_t(σ²). Using the MAP approximation of θ_t within the predictive log-likelihood ensures that our meta-train and meta-test time procedures match, and that the meta-train metric directly optimizes the metric being evaluated at meta-test time. Additionally, we show in Section B.2 that this provides a consistent estimate of σ² on a toy example under regularity conditions. This type of end-to-end meta-learning objective is similar to various recent works, such as Ravi and Larochelle (2017) and Finn et al. (2017). However, in contrast to their emphasis on fast adaptation with a small number of adaptation steps, we are interested in sample efficiency, and thus allow sufficient time for the task adaptation to converge. Solving Eq. 3 requires solving Eq. 2, which can require expensive second-order derivatives; however, we show in Appendix C using implicit differentiation that we can approximate the derivative as
(4)  ∂/∂σ_m² ∑_{t=1}^{B} log p(D_t^val | θ̂_t(σ²)) ≈ ∑_{t=1}^{B} (1/σ_m²) (θ̂_{t,m} − φ̂_m)ᵀ ∇_{θ_{t,m}} log p(D_t^val | θ̂_t)
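The full procedure — MAP adaptation, a coordinate update of φ, and a gradient step on σ² through the validation likelihood — can be sketched in one dimension as follows. This is an illustrative sketch, not the paper's Algorithm 1: the Gaussian model admits a closed-form inner MAP, so the implicit derivative of θ̂ w.r.t. σ² is exact here, and the tanh step clipping is an ad hoc stabilizer:

```python
import numpy as np

def meta_train(num_iters=500, batch=20, n_obs=10, noise_var=0.1,
               true_phi=2.0, true_sigma2=0.5, lr=0.01, seed=0):
    """1-D sketch of the meta-training loop: per-task closed-form MAP
    adaptation (Eq. 2), a coordinate update of phi, and a gradient-ascent
    step on sigma2 through the validation log-likelihood (Eqs. 3-4).
    Fresh tasks are sampled each iteration from the true hierarchy."""
    rng = np.random.default_rng(seed)
    phi, sigma2 = 0.0, 1.0
    for _ in range(num_iters):
        theta0 = true_phi + np.sqrt(true_sigma2) * rng.standard_normal(batch)
        y_tr = theta0[:, None] + np.sqrt(noise_var) * rng.standard_normal((batch, n_obs))
        y_va = theta0[:, None] + np.sqrt(noise_var) * rng.standard_normal((batch, n_obs))
        a, b = n_obs / noise_var, 1.0 / sigma2
        theta_hat = (a * y_tr.mean(1) + b * phi) / (a + b)  # MAP adaptation (Eq. 2)
        phi = theta_hat.mean()                              # coordinate update of phi
        # Implicit derivative of theta_hat w.r.t. sigma2 (exact for this
        # quadratic inner problem), chained with the validation gradient (Eq. 4).
        dtheta = a * (y_tr.mean(1) - phi) / ((a + b) ** 2 * sigma2 ** 2)
        grad_val = n_obs * (y_va.mean(1) - theta_hat) / noise_var
        g = np.sum(grad_val * dtheta)
        # Multiplicative update on sigma2; tanh is an ad hoc step clip.
        sigma2 = float(np.clip(sigma2 * np.exp(lr * np.tanh(g)), 1e-3, 10.0))
    return float(phi), sigma2

phi_hat, sigma2_hat = meta_train()
```

In this Gaussian setting the validation objective is maximized when the shrinkage weight matches the Bayes-optimal posterior weighting, i.e., when σ² equals the true between-task variance, which is why the estimate hovers near it.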
Our resulting meta-learning algorithm is shown in Algorithm 1, where the inner loop corresponds to taking a number of steps of stochastic gradient descent (or an adaptive optimizer such as Adam) on the loss in Eq. 2. Notice that Reptile (Nichol et al., 2018) is a special case of our method when σ² tends to infinity and an appropriate learning rate is used. It is also possible to estimate φ with the predictive log-likelihood objective. We omit the derivation, but the approximate gradient for φ is then equivalent to the first-order MAML update (Finn et al., 2017).
4 Experimental evaluation
We evaluate our proposed method, Shrinkage, along with variants of MAML (Finn et al., 2017) and Reptile (Nichol et al., 2018), on two synthetic few-shot meta-learning domains constructed from hierarchical normal distributions. For each task t, we first sample latent variables θ_{t,m} ~ N(φ_m, σ_m²) for each dimension m, and then observations y_{t,n} ~ N(g(θ_t), σ_y² I). The parameters φ and σ² are fixed but unknown, and different dimensions of θ_t have different values of σ_m². To assess the efficacy of the learning strategy, we use a data-generating process that matches our modeling assumptions. The data distribution's mean g(θ_t) is a function of θ_t and is the main aspect that changes between experiments. The observation noise variance σ_y² I is a fixed and known diagonal matrix. The problem in each domain is to learn the parameters θ_t of a new task given a few observations. The main difference between the two evaluation domains is that g is a linear function of θ_t in the first and is nonlinear in the second. Fig. 3 illustrates the experiments in two dimensions, and Appendix D contains a precise specification of both.
Experiment 1: Linear transform. 

Illustration of the hierarchical normal distributions for both experiments and corresponding performance results. The loss figures each show the mean generalization loss with a 68% credible interval over test tasks for all algorithms versus the number of test-time adaptation steps.
To compare the algorithms, we use the negative log-likelihood up to a constant as the loss, and compare the generalization loss of each algorithm after it adapts to the new task with multiple steps of gradient descent. For MAML and Shrinkage, we evaluate both the standard (non-modular) versions and modular versions that learn module-specific parameters, denoted by the “M-” prefix. To ensure a fair comparison, we increase the flexibility of MAML to match Shrinkage by learning the learning rate of the inner-loop gradient update for each module, similar to Antoniou et al. (2019), who do this to stabilize MAML rather than to enable modularity. Similarly, M-Shrinkage learns a separate σ_m² for each module, whereas Shrinkage learns a single σ² for all modules. In both experiments, the modular algorithms treat each dimension of θ_t as a separate module, whereas the underlying distributions have only two modules, to make the task more challenging for the modular algorithms. We performed an extensive random search over the hyperparameters of each algorithm, choosing the values that minimized the validation loss at meta-test time.
Linear transform.
We begin with a simple model – a two-module joint normal distribution over θ_t. To make each task non-trivial, we restrict the posterior to a narrow subspace near a hyperplane, causing gradient descent to converge slowly regardless of the number of observations. Fig. 2(a) shows a 2-D example of this model as well as the performance of all algorithms on it. With the small number of observations for each task, the main challenge in this task is overfitting. While all algorithms are able to learn each module accurately, each module's loss must reach its minimum at the same time in order to achieve the global minimum of the total loss; otherwise, some modules will begin to overfit while others still underfit. The modular algorithms, M-Shrinkage and M-MAML, learn to adapt different modules at different rates and thus outperform their single-module counterparts. Among non-modular algorithms, the generalization loss of Shrinkage is better than that of Reptile, and matches MAML at its best adaptation step. Without proper regularization, all versions of MAML and Reptile eventually overfit, whereas Shrinkage and M-Shrinkage do not.
Nonlinear transform.
In the second experiment, we explore a more realistic scenario where the data is generated by a nonlinear transformation, again with two modules. The transformation is a spiral that rotates consecutive non-overlapping pairs of parameters by an angle proportional to their distance from the origin. Fig. 2(b) shows an example of this transform applied to a 2-D Gaussian, as well as the performance of all algorithms on this second experiment. Due to the narrow valley in the optimization landscape caused by the spiral transform, all algorithms require hundreds of adaptation steps to minimize the loss. Again, the modular algorithms (M-Shrinkage, M-MAML) do well relative to their non-modular counterparts and to Reptile, which overfits badly. In our experiments, MAML became unstable during training with long inner-loop horizons. As a result, the best-performing hyperparameter value had a shorter adaptation horizon than required for this task, causing both MAML variants to perform worse. We also trained first-order MAML models to try to avoid this instability, but these also underperformed (see results in Fig. 6, Appendix D). Overall, learning a modular prior allows M-Shrinkage to avoid overfitting and outperform the other methods in both of our experiments.
5 Conclusions
We showed that explicitly accounting for modularity is important for good performance in few-shot meta-learning. Our resulting algorithm has ties to MAML and contains Reptile as a special case, providing a new justification for its meta-parameter update rule. Our analysis in the supplement highlights the importance of cross-validation for meta-learning. In future work, we plan to extend our analysis to more general models, and to apply our Shrinkage algorithm to more challenging domains, such as few-shot classification and reinforcement learning.
References

Finn et al. (2017) Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning, pages 1126–1135, 2017.
Nichol et al. (2018) Alex Nichol, Joshua Achiam, and John Schulman. On first-order meta-learning algorithms. arXiv preprint arXiv:1803.02999, 2018.
Gelman et al. (2013) Andrew Gelman, John B Carlin, Hal S Stern, David B Dunson, Aki Vehtari, and Donald B Rubin. Bayesian Data Analysis. Chapman and Hall/CRC, 2013.
Ravi and Larochelle (2017) Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In International Conference on Learning Representations, 2017.
Antoniou et al. (2019) Antreas Antoniou, Harrison Edwards, and Amos Storkey. How to train your MAML. In International Conference on Learning Representations, 2019.
Oreshkin et al. (2018) Boris Oreshkin, Pau Rodríguez López, and Alexandre Lacoste. TADAM: Task dependent adaptive metric for improved few-shot learning. In Conference on Neural Information Processing Systems, 2018.

Lee et al. (2019) Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, and Stefano Soatto. Meta-learning with differentiable convex optimization. In IEEE Conference on Computer Vision and Pattern Recognition, 2019.
Yu et al. (2018) Tianhe Yu, Chelsea Finn, Sudeep Dasari, Annie Xie, Tianhao Zhang, Pieter Abbeel, and Sergey Levine. One-shot imitation from observing humans via domain-adaptive meta-learning. In Robotics: Science and Systems, 2018.
Nagabandi et al. (2019) Anusha Nagabandi, Ignasi Clavera, Simin Liu, Ronald S Fearing, Pieter Abbeel, Sergey Levine, and Chelsea Finn. Learning to adapt in dynamic, real-world environments through meta-reinforcement learning. In International Conference on Learning Representations, 2019.
Chen et al. (2019) Yutian Chen, Yannis Assael, Brendan Shillingford, David Budden, Scott Reed, Heiga Zen, Quan Wang, Luis C. Cobo, Andrew Trask, Ben Laurie, Caglar Gulcehre, Aaron van den Oord, Oriol Vinyals, and Nando de Freitas. Sample efficient adaptive text-to-speech. In International Conference on Learning Representations, 2019.
Snell et al. (2017) Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In Conference on Neural Information Processing Systems, pages 4077–4087, 2017.
Vinyals et al. (2016) Oriol Vinyals, Charles Blundell, Timothy P. Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. In Conference on Neural Information Processing Systems, pages 3630–3638, 2016.
Munkhdalai and Yu (2017) Tsendsuren Munkhdalai and Hong Yu. Meta networks. In International Conference on Machine Learning, pages 2554–2563, 2017.
Santoro et al. (2016) Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. Meta-learning with memory-augmented neural networks. In International Conference on Machine Learning, pages 1842–1850, 2016.
Mishra et al. (2018) Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, and Pieter Abbeel. A simple neural attentive meta-learner. In International Conference on Learning Representations, 2018.
Denevi et al. (2018) Giulia Denevi, Carlo Ciliberto, Dimitris Stamos, and Massimiliano Pontil. Learning to learn around a common mean. In Advances in Neural Information Processing Systems 31, pages 10169–10179, 2018.
Denevi et al. (2019) Giulia Denevi, Carlo Ciliberto, Riccardo Grazzi, and Massimiliano Pontil. Learning-to-learn stochastic gradient descent with biased regularization. In International Conference on Machine Learning, pages 1566–1575, 2019.
Khodak et al. (2019) Mikhail Khodak, Maria-Florina Balcan, and Ameet Talwalkar. Adaptive gradient-based meta-learning methods. arXiv preprint arXiv:1906.02717, 2019.
Andrychowicz et al. (2016) Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. In Conference on Neural Information Processing Systems, pages 3981–3989, 2016.
Wang et al. (2016) Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Rémi Munos, Charles Blundell, Dharshan Kumaran, and Matt Botvinick. Learning to reinforcement learn. arXiv preprint arXiv:1611.05763, 2016.
Chen et al. (2017) Yutian Chen, Matthew W Hoffman, Sergio Gómez Colmenarejo, Misha Denil, Timothy P Lillicrap, Matt Botvinick, and Nando de Freitas. Learning to learn without gradient descent by gradient descent. In International Conference on Machine Learning, pages 748–756, 2017.
Wu et al. (2018) Yuhuai Wu, Mengye Ren, Renjie Liao, and Roger Grosse. Understanding short-horizon bias in stochastic meta-optimization. In International Conference on Learning Representations, 2018.
Grant et al. (2018) Erin Grant, Chelsea Finn, Sergey Levine, Trevor Darrell, and Thomas Griffiths. Recasting gradient-based meta-learning as hierarchical Bayes. In International Conference on Learning Representations, 2018.
Yoon et al. (2018) Jaesik Yoon, Taesup Kim, Ousmane Dia, Sungwoong Kim, Yoshua Bengio, and Sungjin Ahn. Bayesian model-agnostic meta-learning. In Conference on Neural Information Processing Systems, pages 7332–7342, 2018.
Finn et al. (2018) Chelsea Finn, Kelvin Xu, and Sergey Levine. Probabilistic model-agnostic meta-learning. In Conference on Neural Information Processing Systems, pages 9516–9527, 2018.
Ravi and Beatson (2019) Sachin Ravi and Alex Beatson. Amortized Bayesian meta-learning. In International Conference on Learning Representations, 2019.
Edwards and Storkey (2017) Harrison Edwards and Amos Storkey. Towards a neural statistician. In International Conference on Learning Representations, 2017.
Garnelo et al. (2018) Marta Garnelo, Dan Rosenbaum, Chris J Maddison, Tiago Ramalho, David Saxton, Murray Shanahan, Yee Whye Teh, Danilo J Rezende, and S.M. Ali Eslami. Conditional neural processes. arXiv preprint arXiv:1807.01613, 2018.
Gordon et al. (2019) Jonathan Gordon, John Bronskill, Matthias Bauer, Sebastian Nowozin, and Richard E Turner. Meta-learning probabilistic inference for prediction. In International Conference on Learning Representations, 2019.
Zintgraf et al. (2019) Luisa M. Zintgraf, Kyriacos Shiarlis, Vitaly Kurin, Katja Hofmann, and Shimon Whiteson. Fast context adaptation via meta-learning. In International Conference on Machine Learning, 2019.
Rusu et al. (2019) Andrei A. Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, and Raia Hadsell. Meta-learning with latent embedding optimization. In International Conference on Learning Representations, 2019.
Lee and Choi (2018) Yoonho Lee and Seungjin Choi. Gradient-based meta-learning with learned layerwise metric and subspace. In International Conference on Machine Learning, 2018.
Alet et al. (2018) Ferran Alet, Tomás Lozano-Pérez, and Leslie P Kaelbling. Modular meta-learning. In Conference on Robot Learning, 2018.
Appendix A Related work
Meta-learning refers to a class of algorithms in which models are quickly adapted to new tasks with few training examples by leveraging knowledge acquired from a set of training tasks. Meta-learning has proven successful across a wide range of problems, including image classification Oreshkin et al. (2018); Lee et al. (2019), robotics Yu et al. (2018); Nagabandi et al. (2019), and speech synthesis Chen et al. (2019), and many different approaches have been investigated.
Metric-based methods learn a kernel for an embedding space, which is then used at test time to relate unseen examples to already-seen data Snell et al. (2017); Vinyals et al. (2016). Memory-based methods Munkhdalai and Yu (2017); Santoro et al. (2016); Mishra et al. (2018) use external or internal memory architectures to store and leverage key training examples or history-dependent information at test time. Optimization-based methods instead modify the learning procedure by learning a good initialization (Finn et al., 2017; Nichol et al., 2018), a regularizer Denevi et al. (2018, 2019); Khodak et al. (2019), or an optimizer (Ravi and Larochelle, 2017; Andrychowicz et al., 2016; Wang et al., 2016; Chen et al., 2017) that outputs the parameters of the learned model, allowing the learner to adapt quickly and effectively to new tasks.
Our work builds on this third class of approaches. A popular example of these, MAML, directly learns an initialization of the network parameters, from which it can adapt to a new task from the task distribution in only a very small number of gradient steps (Finn et al., 2017). MAML requires expensive computation of second-order derivatives, is hard to scale with the number of adaptation steps, and is subject to the short-horizon bias Wu et al. (2018). Compared to works that emphasize fast adaptation and backpropagation through a limited number of consecutive gradient descent steps, our method optimizes the task parameters toward the regularized optimum and backpropagates through the stationary point. We also show in this paper that Reptile (Nichol et al., 2018), which avoids computing the second-order derivative by updating the meta-parameters towards the average of the optimized task parameters, can be derived as a special case of our framework.
Several hierarchical Bayesian approaches and a variety of inference methods have been proposed for meta-learning. Grant et al. (2018) introduced the first Bayesian variant of MAML using a Laplace approximation. Yoon et al. (2018) and Finn et al. (2018) introduced different probabilistic extensions to MAML, using approximate posteriors (either through an ensemble of particles or variational inference with a Gaussian posterior). Other methods have also proposed gradient-based hierarchical Bayesian methods (e.g. Ravi and Beatson (2019)), where posterior uncertainty is captured via a learned variational distribution that allows efficient test-time variational inference after a few gradient update steps. Other works (e.g. Edwards and Storkey (2017); Garnelo et al. (2018); Gordon et al. (2019)) employ inference networks directly mapping from the query set to variational latent variables, without any gradient-based optimization required at test time. These methods are more space- and compute-efficient, although their capacity is limited by the size of the inference network. We also take a hierarchical Bayesian modeling approach and use the MAP estimate of task parameters for scalability. More sophisticated inference methods can be considered under our framework, but we leave this for future work.
Related to our focus on modularity are a few meta-learning works that do not update all the parameters of the network at test time. Instead, they split them into task-specific and shared parameters Zintgraf et al. (2019), learn to produce network weights from task-specific embeddings Rusu et al. (2019), or learn a layer-wise subspace in which to perform task-specific gradient-based adaptation Lee and Choi (2018). Our hierarchical Bayesian approach with a shrinkage prior provides a flexible framework for incorporating prior knowledge about model structures and task similarities. Recent work also proposed a compositional approach to modular meta-learning (Alet et al., 2018), where at test time a search over module structure is performed with optional adaptation. Our method is complementary to theirs in that we learn how to optimally adapt or reuse modules at test time, which can be used within their adaptation phase.
Our method is also closely related to works on meta-regularization Denevi et al. (2018, 2019); Khodak et al. (2019). While meta-initialization aims to learn a good initialization for the parameters of the network, meta-regularization aims to learn a regularization parameter to avoid overfitting and improve stability. Our work differs from most works in this thread in how it estimates the regularization strength in a modular model.
Appendix B A simple Gaussian example
To illustrate the asymptotic behavior of different learning strategies as the number of tasks T → ∞, we consider a simple model with M = 1 module, observation dimensionality 1, and normally-distributed observations.
Example (1D Normal observation).
The task likelihood function is
(5)  p(D_t | θ_t) = ∏_{n=1}^{N} N(y_{t,n}; θ_t, σ_y²)
We denote by φ₀ and σ₀² the true values of φ and σ² in the following analysis.
B.1 Estimating all variables with MAP
We propose here to estimate the parameters by maximizing the joint density p(θ_{1:T}, φ, σ², D_{1:T}). Since we are interested in the case when σ² → 0, we assign a flat prior to the parameter σ² without affecting the conclusion in the asymptotic case. Maximizing this quantity is equivalent (up to an additive constant) to maximizing the function
(6)  f(θ_{1:T}, φ, σ²) = −(1/(2σ_y²)) ∑_{t=1}^{T} ∑_{n=1}^{N} (y_{t,n} − θ_t)² − (T/2) log σ² − (1/(2σ²)) ∑_{t=1}^{T} (θ_t − φ)²
Proposition 1 (Non-existence of the joint MAP estimate of σ²).
f does not have a global maximum: setting θ_t = φ for all t, f diverges to +∞ as σ² → 0, so that σ̂² = 0.
The proof follows directly from the definition of the joint.
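The pathology is easy to verify numerically: pinning every θ_t to φ and shrinking σ² increases the joint log-density without bound (a sketch of the 1-D example; the data and noise variance are illustrative choices):

```python
import numpy as np

def joint_log_density(theta, phi, sigma2, y, noise_var=1.0):
    """Joint log-density of Eq. 6 (up to a constant): per-task Gaussian
    likelihood plus the shrinkage prior N(theta_t; phi, sigma2), with
    flat priors on phi and sigma2."""
    lik = -0.5 * np.sum((y - theta[:, None]) ** 2) / noise_var
    prior = (-0.5 * len(theta) * np.log(sigma2)
             - 0.5 * np.sum((theta - phi) ** 2) / sigma2)
    return lik + prior

rng = np.random.default_rng(0)
T, N = 5, 8
theta_true = 1.0 + rng.standard_normal(T)
y = theta_true[:, None] + rng.standard_normal((T, N))
phi = 0.0
theta = np.full(T, phi)  # degenerate configuration: every task tied to phi
# The joint log-density grows without bound as sigma2 -> 0.
vals = [joint_log_density(theta, phi, s2, y) for s2 in (1.0, 1e-2, 1e-4, 1e-8)]
```

The likelihood term is constant along this path while the −(T/2) log σ² prior term diverges, which is exactly the degenerate maximum of Proposition 1.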
Proposition 2 (Consistency of the MAP estimate of φ).
For any fixed σ² > 0, the estimate of φ at the joint MAP value of (θ_{1:T}, φ) is the sample mean across all tasks, φ̂ = (1/T) ∑_{t=1}^{T} θ̂_t, which converges to the true value φ₀ as T → ∞.
Proof of Proposition 2.
Proposition 3 (Estimation of σ² by gradient ascent).
Let θ_{1:T} and φ be at their MAP values as functions of σ², and denote by s² the sample variance of θ̂_t across tasks. Maximizing the objective with respect to σ² by gradient ascent diverges toward σ² = 0 if either of the following two conditions is satisfied:
1. the sample variance s² is too small relative to the observation noise, or
2. σ² is initialized below a critical threshold.
Otherwise, it converges to a local maximum σ̂² > 0.
Corollary 1.
As the number of training tasks T → ∞, condition 1 and the initialization boundary of σ² in condition 2 converge to fixed limits, beyond which the estimate converges at
(10)
Proof of Proposition 3 and Corollary 1.
Setting the partial derivatives of Eq. 6 to zero gives
(12)
Now, by plugging Eq. 11 into this expression, we have
(13)
Hence we have a quadratic equation in σ², which is given by
(14)
where the coefficients involve a standard Chi-squared random variable. When condition (15) or (16) does not hold, no stationary point exists and gradient ascent from any initialization diverges toward σ² = 0. Figs. 3(c) and 3(a) illustrate the log-posterior and its gradient in that case.
When the condition above is satisfied, there exist two roots (or one, when the equality holds) at:
(17)
which are asymptotically
(18)
By checking the sign of the gradient and plugging in Eq. 11, we find that the left root is a local minimum and the right root is a local maximum. So if one follows gradient ascent to estimate σ², the optimization diverges toward σ² = 0 when σ² is initialized below the left root, and converges to the right root otherwise. Figs. 3(d) and 3(b) illustrate the log-posterior as a function of σ² and its gradient when θ_{1:T} and φ are at their stationary points and condition 1 is satisfied. ∎




B.2 Estimating σ² with the predictive log-likelihood
Here we consider updating θ_{1:T} and φ with the same MAP estimate as above, but estimating σ² with the approximate gradient of the predictive log-likelihood (Eq. 4) on a set of independently sampled validation data D_{1:T}^val, where y_{t,n}^val ~ N(θ_t, σ_y²).
Proposition 4 (Consistency of the predictive estimate of σ²). As T → ∞, the approximate gradient update of σ² converges to the true value σ₀².
Proof.
Following Eq. 4, the approximate gradient of the validation log-likelihood w.r.t. σ² is
(19)
Following the generating process of θ_t and the data, conditioned on φ and σ², the joint distribution of the training and validation statistics with θ_t marginalized out is jointly normal and satisfies
(21)
and φ̂ is the sample average of T conditionally independent random variables.
In the limit T → ∞, by expanding the product in Eq. 20 and replacing the averages of the various terms by their expectations, we have
(22)
We find that in this limit, the update of σ² converges to the true value σ₀². ∎
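A Monte-Carlo sanity check of this consistency result: averaged over many tasks, the approximate predictive gradient is positive when σ² is below the true value and negative when above, so gradient ascent is pushed toward σ₀². This is a sketch of the 1-D example with the closed-form MAP and its exact implicit derivative; all constants are illustrative choices:

```python
import numpy as np

def mean_pred_grad(sigma2, true_sigma2=0.5, noise_var=0.5, n_obs=5,
                   num_tasks=200_000, phi=0.0, seed=1):
    """Monte-Carlo average over tasks of the approximate predictive
    gradient w.r.t. sigma2 for the 1-D example: closed-form MAP theta_hat,
    exact implicit derivative d theta_hat / d sigma2, chained with the
    validation log-likelihood gradient."""
    rng = np.random.default_rng(seed)
    theta0 = phi + np.sqrt(true_sigma2) * rng.standard_normal(num_tasks)
    # Sufficient statistics: train / validation sample means per task.
    ybar_tr = theta0 + np.sqrt(noise_var / n_obs) * rng.standard_normal(num_tasks)
    ybar_va = theta0 + np.sqrt(noise_var / n_obs) * rng.standard_normal(num_tasks)
    a, b = n_obs / noise_var, 1.0 / sigma2
    theta_hat = (a * ybar_tr + b * phi) / (a + b)
    dtheta = a * (ybar_tr - phi) / ((a + b) ** 2 * sigma2 ** 2)
    grad_val = n_obs * (ybar_va - theta_hat) / noise_var
    return float(np.mean(grad_val * dtheta))

g_low = mean_pred_grad(sigma2=0.1)   # below sigma0^2 = 0.5: gradient pushes up
g_high = mean_pred_grad(sigma2=2.0)  # above sigma0^2 = 0.5: gradient pushes down
```

The expected gradient vanishes exactly when the shrinkage weight equals the Bayes-optimal posterior weighting, i.e., at σ² = σ₀², unlike the joint MAP objective of Section B.1.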
Appendix C Derivation of the approximate gradient of the predictive log-likelihood
Lemma 1 (Implicit differentiation).
Let θ*(λ) be a stationary point of the function f(θ, λ), that is, ∂f/∂θ (θ*(λ), λ) = 0. Then the gradient of g(λ) = L(θ*(λ)) can be computed as
(23)  dg/dλ = −∇_θ L(θ*)ᵀ [∂²f/∂θ ∂θᵀ]⁻¹ ∂²f/∂θ ∂λ
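The lemma can be sanity-checked numerically against finite differences on a toy problem (an illustrative choice of f and L, unrelated to the paper's model):

```python
import numpy as np

# Toy instance: f(theta, lam) = -0.5 * (theta - lam**2)**2, whose stationary
# point is theta*(lam) = lam**2, and outer objective L(theta) = sin(theta),
# so that g(lam) = L(theta*(lam)) = sin(lam**2).

def implicit_grad(lam):
    """Gradient of g via the lemma: -grad_L(theta*)^T [f_tt]^{-1} f_tl."""
    theta_star = lam ** 2       # solves df/dtheta = -(theta - lam**2) = 0
    dL = np.cos(theta_star)     # L'(theta*)
    f_tt = -1.0                 # d^2 f / d theta^2
    f_tl = 2.0 * lam            # d^2 f / d theta d lam
    return -dL * f_tl / f_tt

def finite_diff_grad(lam, eps=1e-6):
    g = lambda l: np.sin(l ** 2)
    return (g(lam + eps) - g(lam - eps)) / (2.0 * eps)
```

Both routes give d/dλ sin(λ²) = 2λ cos(λ²), because the first-order sensitivity of θ* is exactly captured by the two second derivatives of f.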
To compute the derivative of θ̂ with respect to σ², denote by ψ = (θ_{1:B}, φ) the set of all model parameters of which we take the MAP estimate, and apply Lemma 1 to the log-posterior on the training subsets in Eq. 2. We have
(25)
where [·]_m denotes the m-th block row of the matrix. We assume a normal distribution for the prior of φ, and then the second-order derivatives are given as
(26)
(27)
where H_t is the negative Hessian matrix of task t's training log-likelihood, diag(·) denotes an operator converting a list of matrices indexed by t into a block-diagonal matrix whose t-th diagonal block is the t-th matrix, I_d denotes an identity matrix of dimension d, and d_m is the dimension of the m-th module.
It is impractical to invert this Hessian matrix except for a simple model with a few tasks. We first make a block-diagonal approximation
(28)
and plug it into Eq. 25, and further into Eq. 4, where [·]_m denotes the m-th block column of the inverse matrix. Unfortunately, the Hessian matrix H_t is still expensive to compute. We can either take a diagonal approximation
(29)
where diag(H_t) is the diagonal matrix of H_t, or, if the prior influence from φ is much stronger than that of the task observations, i.e., 1/σ_m² dominates H_t, ignore the second-order derivative of the likelihood and further simplify as
(30)  ∂/∂σ_m² ∑_t log p(D_t^val | θ̂_t) ≈ ∑_t (1/σ_m²) (θ̂_{t,m} − φ̂_m)ᵀ ∇_{θ_{t,m}} log p(D_t^val | θ̂_t)
Appendix D Additional experiment details
We evaluate our proposed method, Shrinkage, along with variants of MAML (Finn et al., 2017) and Reptile (Nichol et al., 2018), on two synthetic few-shot meta-learning domains.
Recall from the main paper that the parameters φ and σ² are fixed but unknown, and different modules have different shrinkage parameters σ_m². We use I_d, 1_d, and 0_d to denote the identity matrix and the length-d vectors of 1s and 0s, respectively. We sample latent variables θ_{t,m} ~ N(φ_m, σ_m²) and observations y_{t,n} ~ N(g(θ_t), σ_y² I) for each module m, task t, and observation n. The data distribution's mean g(θ_t) is a function of θ_t and varies across experiments. The observation noise variance σ_y² I is a fixed and known diagonal matrix. The problem in each domain is to learn the parameters θ_t of a new task given a few observations, with the same number of observations N for all tasks. The main difference between the two evaluation domains is that g is a linear function of θ_t in the first and is nonlinear in the second.
Experiment 1: Linear transformation
Experiment 1 defines a simple, modular model in which the final observation dimension is the sum of the remaining terms, in order to create a valley in the optimization landscape. The two true modules are thus the first block of dimensions and the final dimension, but the algorithms treat each dimension as a separate module. The mean of the observations equals θ_t in the first dimensions and the sum of its entries in the final dimension. However, because σ_m² is small in the final dimension, the posterior of θ_t is restricted to a small subspace near a hyperplane. Gradient descent thus converges slowly regardless of the number of observations.
Experiment 2: Nonlinear transformation
Experiment 2 defines a more realistic optimization scenario for meta-learning, where the data is generated by a nonlinear transformation of the task parameters, again with two true modules. The transformation g is a “swirl” effect that rotates non-overlapping pairs of consecutive parameters by an angle proportional to their distance from the origin. Specifically, each consecutive non-overlapping pair (θ_i, θ_{i+1}) is mapped to
(31)  (θ_i cos(ωr) − θ_{i+1} sin(ωr), θ_i sin(ωr) + θ_{i+1} cos(ωr)),  r = √(θ_i² + θ_{i+1}²)
where ω is the angular velocity of the rotation. This is a nonlinear, volume-preserving mapping that forms a spiral in the observation space. Fig. 2(b) shows an example of this transform in two dimensions when applied to a Gaussian.
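A minimal sketch of this swirl transform (the function name and the pairing convention shown are assumptions consistent with the description above):

```python
import numpy as np

def swirl(theta, omega=1.0):
    """Rotate each non-overlapping pair of consecutive coordinates by an
    angle omega * r, where r is the pair's distance from the origin.
    Each pair undergoes a pure rotation, so the map preserves volume."""
    theta = np.asarray(theta, dtype=float)
    out = theta.copy()
    for i in range(0, theta.size - 1, 2):
        x, y = theta[i], theta[i + 1]
        r = np.hypot(x, y)
        c, s = np.cos(omega * r), np.sin(omega * r)
        out[i], out[i + 1] = c * x - s * y, s * x + c * y
    return out

p = swirl([3.0, 4.0])  # rotated by 5 radians; the norm is unchanged
```

Because the rotation angle grows with the radius, points at different distances are rotated by different amounts, which is what produces the spiral-shaped, narrow-valley likelihood surface described above.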
Results using the MAP estimator for σ²
Fig. 4(a) and Fig. 4(b) show the test adaptation results from Experiments 1 and 2 for variants of our Shrinkage method. Specifically, these include results using the MAP estimator of σ² (M-Shrinkage MAP) in place of the predictive likelihood (M-Shrinkage). In the first experiment, M-Shrinkage MAP simply underperforms the predictive-likelihood version. However, in the second experiment, the MAP estimate is extremely unstable and the learned value causes adaptation to diverge. These results provide empirical evidence for the failure of MAP estimation for σ² detailed in Section B.1.


Results comparing MAML with first-order MAML
Fig. 5(a) and Fig. 5(b) show the test adaptation results from Experiments 1 and 2 for variants of MAML. To try to avoid the instabilities of MAML at large numbers of adaptation steps, we also evaluated its first-order variant, shown as FOMAML, and a modular variant of first-order MAML (FOMMAML). Like our implementations of MAML and M-MAML, FOMAML learns a single learning rate and FOMMAML learns a learning rate per module. These are more stable during training, but performance degrades after many inner-loop steps. We suspect that this is because first-order MAML ignores the second-derivative term and, after many adaptation steps, the direction of its gradient update for φ becomes uncorrelated with the correct update direction. As such, the performance of MAML and first-order MAML is similar.


5 Conclusions
We showed that explicitly accounting for modularity is important for good performance in few-shot meta-learning. The resulting algorithm has ties to MAML and contains Reptile as a special case, providing a new justification for its meta-parameter update rule. Our analysis in the supplement highlights the importance of cross-validation for meta-learning. In future work, we plan to extend our analysis to more general models, and to apply our Shrinkage algorithm to more challenging domains, such as few-shot classification and reinforcement learning.
References

Finn et al. (2017) Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning, pages 1126–1135, 2017.
Nichol et al. (2018) Alex Nichol, Joshua Achiam, and John Schulman. On first-order meta-learning algorithms. arXiv preprint arXiv:1803.02999, 2018.
 Gelman et al. (2013) Andrew Gelman, John B Carlin, Hal S Stern, David B Dunson, Aki Vehtari, and Donald B Rubin. Bayesian Data Analysis. Chapman and Hall/CRC, 2013.
Ravi and Larochelle (2017) Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In International Conference on Learning Representations, 2017.
 Antoniou et al. (2019) Antreas Antoniou, Harrison Edwards, and Amos Storkey. How to train your MAML. In International Conference on Learning Representations, 2019.
Oreshkin et al. (2018) Boris Oreshkin, Pau Rodríguez López, and Alexandre Lacoste. TADAM: Task dependent adaptive metric for improved few-shot learning. In Conference on Neural Information Processing Systems, 2018.

Lee et al. (2019) Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, and Stefano Soatto. Meta-learning with differentiable convex optimization. In IEEE Conference on Computer Vision and Pattern Recognition, 2019.
Yu et al. (2018) Tianhe Yu, Chelsea Finn, Sudeep Dasari, Annie Xie, Tianhao Zhang, Pieter Abbeel, and Sergey Levine. One-shot imitation from observing humans via domain-adaptive meta-learning. In Robotics: Science and Systems, 2018.
Nagabandi et al. (2019) Anusha Nagabandi, Ignasi Clavera, Simin Liu, Ronald S Fearing, Pieter Abbeel, Sergey Levine, and Chelsea Finn. Learning to adapt in dynamic, real-world environments through meta-reinforcement learning. In International Conference on Learning Representations, 2019.
Chen et al. (2019) Yutian Chen, Yannis Assael, Brendan Shillingford, David Budden, Scott Reed, Heiga Zen, Quan Wang, Luis C. Cobo, Andrew Trask, Ben Laurie, Caglar Gulcehre, Aaron van den Oord, Oriol Vinyals, and Nando de Freitas. Sample efficient adaptive text-to-speech. In International Conference on Learning Representations, 2019.
Snell et al. (2017) Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In Conference on Neural Information Processing Systems, pages 4077–4087, 2017.
 Vinyals et al. (2016) Oriol Vinyals, Charles Blundell, Timothy P. Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. In Conference on Neural Information Processing Systems, pages 3630–3638, 2016.
 Munkhdalai and Yu (2017) Tsendsuren Munkhdalai and Hong Yu. Meta networks. In International Conference on Machine Learning, pages 2554–2563, 2017.
Santoro et al. (2016) Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. Meta-learning with memory-augmented neural networks. In International Conference on Machine Learning, pages 1842–1850, 2016.
Mishra et al. (2018) Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, and Pieter Abbeel. A simple neural attentive meta-learner. In International Conference on Learning Representations, 2018.
Denevi et al. (2018) Giulia Denevi, Carlo Ciliberto, Dimitris Stamos, and Massimiliano Pontil. Learning to learn around a common mean. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 10169–10179. 2018.
Denevi et al. (2019) Giulia Denevi, Carlo Ciliberto, Riccardo Grazzi, and Massimiliano Pontil. Learning-to-learn stochastic gradient descent with biased regularization. In International Conference on Machine Learning, pages 1566–1575, 2019.
Khodak et al. (2019) Mikhail Khodak, Maria-Florina Balcan, and Ameet Talwalkar. Adaptive gradient-based meta-learning methods. arXiv preprint arXiv:1906.02717, 2019.
 Andrychowicz et al. (2016) Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. In Conference on Neural Information Processing Systems, pages 3981–3989, 2016.
Wang et al. (2016) Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Rémi Munos, Charles Blundell, Dharshan Kumaran, and Matt Botvinick. Learning to reinforcement learn. arXiv preprint arXiv:1611.05763, 2016.
 Chen et al. (2017) Yutian Chen, Matthew W Hoffman, Sergio Gómez Colmenarejo, Misha Denil, Timothy P Lillicrap, Matt Botvinick, and Nando de Freitas. Learning to learn without gradient descent by gradient descent. In International Conference on Machine Learning, pages 748–756, 2017.
Wu et al. (2018) Yuhuai Wu, Mengye Ren, Renjie Liao, and Roger Grosse. Understanding short-horizon bias in stochastic meta-optimization. In International Conference on Learning Representations, 2018.
Grant et al. (2018) Erin Grant, Chelsea Finn, Sergey Levine, Trevor Darrell, and Thomas Griffiths. Recasting gradient-based meta-learning as hierarchical Bayes. In International Conference on Learning Representations, 2018.
Yoon et al. (2018) Jaesik Yoon, Taesup Kim, Ousmane Dia, Sungwoong Kim, Yoshua Bengio, and Sungjin Ahn. Bayesian model-agnostic meta-learning. In Conference on Neural Information Processing Systems, pages 7332–7342, 2018.
Finn et al. (2018) Chelsea Finn, Kelvin Xu, and Sergey Levine. Probabilistic model-agnostic meta-learning. In Conference on Neural Information Processing Systems, pages 9516–9527, 2018.
Ravi and Beatson (2019) Sachin Ravi and Alex Beatson. Amortized Bayesian meta-learning. In International Conference on Learning Representations, 2019.
 Edwards and Storkey (2017) Harrison Edwards and Amos Storkey. Towards a neural statistician. In International Conference on Learning Representations, 2017.
 Garnelo et al. (2018) Marta Garnelo, Dan Rosenbaum, Chris J Maddison, Tiago Ramalho, David Saxton, Murray Shanahan, Yee Whye Teh, Danilo J Rezende, and S.M. Ali Eslami. Conditional neural processes. arXiv preprint arXiv:1807.01613, 2018.
Gordon et al. (2019) Jonathan Gordon, John Bronskill, Matthias Bauer, Sebastian Nowozin, and Richard E Turner. Meta-learning probabilistic inference for prediction. In International Conference on Learning Representations, 2019.
Zintgraf et al. (2019) Luisa M. Zintgraf, Kyriacos Shiarlis, Vitaly Kurin, Katja Hofmann, and Shimon Whiteson. Fast context adaptation via meta-learning. In International Conference on Machine Learning, 2019.
Rusu et al. (2019) Andrei A. Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, and Raia Hadsell. Meta-learning with latent embedding optimization. In International Conference on Learning Representations, 2019.
Lee and Choi (2018) Yoonho Lee and Seungjin Choi. Gradient-based meta-learning with learned layerwise metric and subspace. In International Conference on Machine Learning, 2018.
Alet et al. (2018) Ferran Alet, Tomás Lozano-Pérez, and Leslie P Kaelbling. Modular meta-learning. In Conference on Robot Learning, 2018.
Appendix A Related work
Meta-learning refers to a class of algorithms in which models are quickly adapted to new tasks with few training examples by leveraging knowledge acquired from a set of training tasks. Meta-learning has proven successful across a wide range of problems, including image classification (Oreshkin et al., 2018; Lee et al., 2019), robotics (Yu et al., 2018; Nagabandi et al., 2019), and speech synthesis (Chen et al., 2019), and many different approaches have been investigated.
Metric-based methods learn a kernel for an embedding space, which is then used at test time to relate unseen examples to already-seen data (Snell et al., 2017; Vinyals et al., 2016). Memory-based methods (Munkhdalai and Yu, 2017; Santoro et al., 2016; Mishra et al., 2018) use external or internal memory architectures to store and leverage key training examples or history-dependent information at test time. Optimization-based methods instead modify the learning procedure, by learning a good initialization (Finn et al., 2017; Nichol et al., 2018), a regularizer (Denevi et al., 2018, 2019; Khodak et al., 2019), or an optimizer (Ravi and Larochelle, 2017; Andrychowicz et al., 2016; Wang et al., 2016; Chen et al., 2017) that outputs the parameters of the learned model, allowing the learner to adapt quickly and effectively to new tasks.
Our work builds on this third class of approaches. A popular example, MAML, directly learns an initialization of the network parameters from which it can adapt to a new task from the task distribution in only a small number of gradient steps (Finn et al., 2017). However, MAML requires expensive computation of second-order derivatives, is hard to scale with the number of adaptation steps, and is subject to the short-horizon bias (Wu et al., 2018). In contrast to works that emphasize fast adaptation and backpropagation through a limited number of consecutive gradient-descent steps, our method optimizes the task parameters toward the regularized optimum and backpropagates through the stationary point. We also show in this paper that Reptile (Nichol et al., 2018), which avoids computing second-order derivatives by updating the meta parameters towards the average of the optimized task parameters, can be derived as a special case of our framework.
Several hierarchical Bayesian approaches, with a variety of inference methods, have been proposed for meta-learning. Grant et al. (2018) introduced the first Bayesian variant of MAML, using a Laplace approximation. Yoon et al. (2018) and Finn et al. (2018) introduced different probabilistic extensions to MAML, using approximate posteriors (either an ensemble of particles or variational inference with a Gaussian posterior). Other gradient-based hierarchical Bayesian methods (e.g., Ravi and Beatson, 2019) capture posterior uncertainty via a learned variational distribution that allows efficient test-time variational inference after a few gradient update steps. Other works (e.g., Edwards and Storkey, 2017; Garnelo et al., 2018; Gordon et al., 2019) employ inference networks that map directly from the query set to variational latent variables, requiring no gradient-based optimization at test time. These methods are more space- and compute-efficient, although their capacity is limited by the size of the inference network. We also take a hierarchical Bayesian modeling approach, and use the MAP estimate of the task parameters for scalability. More sophisticated inference methods can be considered under our framework, but we leave this for future work.
Related to our focus on modularity are a few meta-learning works that do not update all the parameters of the network at test time. Instead, they split the parameters into task-specific and shared ones (Zintgraf et al., 2019), learn to produce network weights from task-specific embeddings (Rusu et al., 2019), or learn a layerwise subspace in which to perform task-specific gradient-based adaptation (Lee and Choi, 2018). Our hierarchical Bayesian approach with a shrinkage prior provides a flexible framework for incorporating prior knowledge about model structure and task similarity. Recent work also proposed a compositional approach to modular meta-learning (Alet et al., 2018), in which a search over module structure, with optional adaptation, is performed at test time. Our method is complementary to theirs in that we learn how to optimally adapt or reuse modules at test time, which can be used within their adaptation phase.
Our method is also closely related to work on meta-regularization (Denevi et al., 2018, 2019; Khodak et al., 2019). While meta-initialization aims to learn a good initialization for the parameters of the network, meta-regularization aims to learn a regularization parameter that avoids overfitting and improves stability. Our work differs from most work in this line in how it estimates the regularization strength in a modular model.
Appendix B A simple Gaussian example
To illustrate the asymptotic behavior of different learning strategies, we consider a simple model with a single module and one-dimensional, normally distributed observations,
Example (1D Normal observation).
The task likelihood function is
(5) 
In the following analysis, we distinguish the true values of these variables from their estimates.
B.1 Estimating all variables with MAP
We consider here estimating all parameters jointly by maximizing the joint posterior. Since we are interested in the asymptotic case, we assign a flat prior without affecting the asymptotic conclusions. Maximizing this quantity is equivalent (up to an additive constant) to maximizing the function
(6) 
Proposition 1 (Nonexistence of the joint MAP estimate).
The objective in (6) does not have a global maximum: it diverges to infinity as the shrinkage variance tends to zero, so the joint MAP estimate does not exist.
The proof follows directly from the definition of the joint.
Proposition 2 (Consistency of the MAP estimate).
For any fixed shrinkage variance, the estimate of the central parameter at the joint MAP value is the sample mean across all tasks, which concentrates on the true value (a delta distribution) as the number of tasks grows.
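Both behaviors, the divergence in Proposition 1 and the fixed-variance sample-mean result in Proposition 2, can be reproduced numerically. The sketch below assumes one particular instantiation of the 1D Normal example (one observation per task with unit noise, and flat priors); these model details are our assumption, not necessarily the exact likelihood in (5).

```python
import numpy as np

def joint_log_post(phi, sigma2, theta, y):
    """Joint log-posterior (up to a constant) for an assumed instantiation
    of the 1D Normal example: y_t ~ N(theta_t, 1), theta_t ~ N(phi, sigma2),
    with flat priors on phi and sigma2."""
    log_lik = -0.5 * np.sum((y - theta) ** 2)
    log_prior = (-0.5 * len(y) * np.log(sigma2)
                 - np.sum((theta - phi) ** 2) / (2.0 * sigma2))
    return log_lik + log_prior

y = np.array([0.3, -1.2, 0.8, 2.1])  # one observation per task
phi = y.mean()

# Proposition 1: setting theta_t = phi zeroes the quadratic prior term,
# and the remaining -T/2 * log(sigma2) term grows without bound as
# sigma2 -> 0, so the joint MAP over all variables does not exist.
vals = [joint_log_post(phi, s2, np.full_like(y, phi), y)
        for s2 in (1.0, 1e-2, 1e-4)]
assert vals[0] < vals[1] < vals[2]  # objective keeps increasing

# Proposition 2: for a *fixed* sigma2, alternating exact maximization over
# theta and phi converges, and the resulting phi is the sample mean.
sigma2, phi_hat = 0.5, 0.0
for _ in range(200):
    theta_hat = (y + phi_hat / sigma2) / (1.0 + 1.0 / sigma2)  # argmax over theta
    phi_hat = theta_hat.mean()                                 # argmax over phi
print(phi_hat, y.mean())  # the two agree
```

The coordinate-ascent update for `phi_hat` is a contraction with factor 1/(1 + sigma2), so convergence to the sample mean is geometric for any positive fixed variance.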