1 Introduction
Supervised learning has proven extremely effective for many problems where large amounts of labeled training data are available. There is a common hope that unsupervised learning will prove similarly powerful in situations where labels are expensive, impractical to collect, or where the prediction target is unknown during training. Unsupervised learning, however, has yet to fulfill this promise. One explanation for this failure is that unsupervised representation learning algorithms are typically mismatched to the target task. Ideally, learned representations should linearly expose high-level attributes of data (e.g. object identity) and perform well in semi-supervised settings. Many current unsupervised objectives, however, optimize for surrogates such as the log-likelihood of a generative model or reconstruction error, producing useful representations only as a side effect.
Unsupervised representation learning seems uniquely suited for meta-learning (Hochreiter et al., 2001; Schmidhuber, 1995). Unlike most tasks where meta-learning is applied, unsupervised learning does not define an explicit objective, which makes it impossible to phrase the task as a standard optimization problem. It is possible, however, to directly express a meta-objective that captures the quality of representations produced by an unsupervised update rule, by evaluating the usefulness of the representation for candidate tasks. In this work, we propose to meta-learn an unsupervised update rule by meta-training on a meta-objective that directly optimizes the utility of the unsupervised representation. Unlike hand-designed unsupervised learning rules, this meta-objective directly targets the usefulness of a representation generated from unlabeled data for later supervised tasks.
By recasting unsupervised representation learning as meta-learning, we treat the creation of the unsupervised update rule as a transfer learning problem. Instead of learning transferable features, we learn a transferable learning rule which does not require access to labels and generalizes across both data domains and neural network architectures. Although we focus on the meta-objective of semi-supervised classification here, in principle a learning rule could be optimized to generate representations for any subsequent task.

2 Related Work
2.1 Unsupervised Representation Learning
Unsupervised learning is a topic of broad and diverse interest. Here we briefly review several techniques that can lead to a useful latent representation of a dataset. In contrast to our work, each method imposes a manually defined training algorithm or loss function, whereas we learn the algorithm that creates useful representations as determined by a meta-objective.

Autoencoders (Hinton and Salakhutdinov, 2006) compress data through a bottleneck and optimize a reconstruction loss. Extensions have been made to denoise data (Vincent et al., 2008, 2010), as well as to compress information in an information-theoretic way (Kingma and Welling, 2013). Le et al. (2011) further explored scaling up these unsupervised methods to large image datasets.
Generative adversarial networks (Goodfellow et al., 2014) take another approach to unsupervised feature learning. Instead of a loss function, an explicit min-max optimization is defined to learn a generative model of a data distribution. Recent work has shown that this training procedure can learn unsupervised features useful for few-shot learning (Radford et al., 2015; Donahue et al., 2016; Dumoulin et al., 2016).
Other techniques rely on self-supervision, where labels are easily generated to create a non-trivial 'supervised' loss. Domain knowledge of the input is often necessary to define these losses. Noroozi and Favaro (2016) unscramble jigsaw-like crops of an image. Techniques used by Misra et al. (2016) and Sermanet et al. (2017) rely on using temporal ordering from videos.
Another approach to unsupervised learning relies on feature space design, such as clustering. Coates and Ng (2012) showed that k-means can be used for feature learning. Xie et al. (2016) jointly learn features and cluster assignments. Bojanowski and Joulin (2017) develop a scalable technique to cluster by predicting noise. Other techniques, such as Schmidhuber (1992), Hochreiter and Schmidhuber (1999), and Olshausen and Field (1997), define various desirable properties of the latent representation of the input, such as predictability, complexity of the encoding mapping, independence, or sparsity, and optimize to achieve these properties.

2.2 Meta-Learning
Most meta-learning algorithms consist of two levels of learning, or 'loops' of computation: an inner loop, where some form of learning occurs (e.g. an optimization process), and an outer loop or meta-training loop, which optimizes some aspect of the inner loop, parameterized by meta-parameters. The performance of the inner-loop computation for a given set of meta-parameters is quantified by a meta-objective. Meta-training is then the process of adjusting the meta-parameters so that the inner loop performs well on this meta-objective. Meta-learning approaches differ by the computation performed in the inner loop, the domain, the choice of meta-parameters, and the method of optimizing the outer loop.
Some of the earliest work in meta-learning includes work by Schmidhuber (1987), which explores a variety of meta-learning and self-referential algorithms. Similarly to our algorithm, Bengio et al. (1990, 1992) propose to learn a neuron-local learning rule, though their approach differs in task and problem formulation. Additionally, Runarsson and Jonsson (2000) meta-learn supervised learning rules which mix local and global network information. A number of papers propose meta-learning for few-shot learning (Vinyals et al., 2016; Ravi and Larochelle, 2016; Mishra et al., 2017; Finn et al., 2017; Snell et al., 2017), though these do not take advantage of unlabeled data. Others make use of both labeled and unlabeled data (Ren et al., 2018). Hsu et al. (2018) use a task created with no supervision to then train few-shot detectors. Garg (2018) uses meta-learning for unsupervised learning, primarily in the context of clustering and with a small number of meta-parameters.
To allow easy comparison against other existing approaches, we present a more extensive survey of previous work in meta-learning in table form in Table 1, highlighting differences in choice of task, structure of the meta-learning problem, choice of meta-architecture, and choice of domain.
To our knowledge, we are the first meta-learning approach to tackle the problem of unsupervised representation learning, where the inner loop consists of unsupervised learning. This contrasts with transfer learning, where a neural network is instead trained on a similar dataset, and then fine-tuned or otherwise post-processed on the target dataset. We additionally believe we are the first representation meta-learning approach to generalize across input data modalities as well as datasets, the first to generalize across permutation of the input dimensions, and the first to generalize across neural network architectures (e.g. layer width, network depth, activation function).
3 Model Design
We consider a multilayer perceptron (MLP) with parameters $\phi_t$ as the base model. The inner loop of our meta-learning process trains this base model via iterative application of our learned update rule. See Figure 1 for a schematic illustration and Appendix A for a more detailed diagram.

In standard supervised learning, the 'learned' optimizer is stochastic gradient descent (SGD). A supervised loss $l(x_t, y_t)$ is associated with this model, where $x_t$ is a minibatch of inputs, and $y_t$ are the corresponding labels. The parameters $\phi_t$ of the base model are then updated iteratively by performing SGD using the gradient $\partial l / \partial \phi_t$. This supervised update rule can be written as $\phi_{t+1} = \mathrm{SupervisedUpdate}(\phi_t, x_t, y_t; \theta)$, where $t$ denotes the inner-loop iteration or step. Here $\theta$ are the meta-parameters of the optimizer, which consist of hyperparameters such as learning rate and momentum.

In this work, our learned update is a parametric function which does not depend on label information, $\phi_{t+1} = \mathrm{UnsupervisedUpdate}(\phi_t, x_t; \theta)$. This form of the update rule is general: it encompasses many unsupervised learning algorithms and all methods in Section 2.1.
In traditional learning algorithms, expert knowledge or a simple hyperparameter search determines $\theta$, which consists of a handful of meta-parameters such as learning rate and regularization constants. In contrast, our update rule has orders of magnitude more meta-parameters, including the weights of a neural network. We train these meta-parameters by performing SGD on the sum of the MetaObjective over the course of (inner-loop) training in order to find optimal parameters $\theta^*$,

$$\theta^* = \operatorname*{argmin}_{\theta} \; \mathbb{E}_{\mathrm{task}}\!\left[\sum_t \mathrm{MetaObjective}(\phi_t)\right], \qquad (1)$$

that minimize the meta-objective over a distribution of training tasks. Note that $\phi_t$ is a function of $\theta$, since $\theta$ affects the optimization trajectory.
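The two-loop structure just described can be illustrated on a toy problem. In the sketch below (our own illustration, not the paper's code), the inner loop runs gradient descent whose single meta-parameter is its learning rate, and the outer loop meta-trains that parameter by descending a finite-difference estimate of the meta-gradient; the paper instead meta-trains the many weights of an update network with truncated backpropagation.

```python
def inner_loop(meta_lr, w0=0.0, steps=10):
    """Inner loop: gradient descent on f(w) = (w - 3)^2 using the
    meta-parameter `meta_lr` as step size. Returns the final loss,
    which plays the role of the (summed) MetaObjective."""
    w = w0
    for _ in range(steps):
        w = w - meta_lr * 2.0 * (w - 3.0)  # grad of (w - 3)^2 is 2(w - 3)
    return (w - 3.0) ** 2

def meta_train(meta_lr0=0.01, meta_steps=100, outer_lr=1e-4, eps=1e-5):
    """Outer loop: SGD on the meta-parameter, using a central
    finite-difference estimate of d(MetaObjective)/d(meta_lr)."""
    meta_lr = meta_lr0
    for _ in range(meta_steps):
        g = (inner_loop(meta_lr + eps) - inner_loop(meta_lr - eps)) / (2 * eps)
        meta_lr -= outer_lr * g
    return meta_lr
```

Meta-training drives the inner learning rate toward a value that makes the fixed budget of inner steps more effective, exactly because the final loss is a function of the meta-parameter through the whole optimization trajectory.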
In the following sections, we briefly review the main components of this model: the base model, the UnsupervisedUpdate, and the MetaObjective. See the Appendix for a complete specification. Additionally, code and meta-trained parameters for our meta-learned UnsupervisedUpdate are available at https://github.com/tensorflow/models/tree/master/research/learning_unsupervised_learning.
3.1 Base model
Our base model consists of a standard fully connected multilayer perceptron (MLP), with batch normalization (Ioffe and Szegedy, 2015) and ReLU nonlinearities. We chose this as opposed to a convolutional model to limit the inductive bias of convolutions in favor of learned behavior from the UnsupervisedUpdate. We call the pre-nonlinearity activations $z^l$ and the post-nonlinearity activations $x^l$, for $l \in \{0, \dots, L\}$, where $L$ is the total number of layers and $x^0$ is the network input (raw data). The parameters are $\phi = \{W^l, b^l, V^l\}$, where $W^l$ and $b^l$ are the weights and biases (applied after batch norm) for layer $l$, and $V^l$ are the corresponding weights used in the backward pass.

3.2 Learned update rule
We wish for our update rule to generalize across architectures with different widths, depths, or even network topologies. To achieve this, we design our update rule to be neuron-local, so that updates are a function of pre- and postsynaptic neurons in the base model, and are defined for any base model architecture. This has the added benefit that it makes the weight updates more similar to synaptic updates in biological neurons, which depend almost exclusively on local pre- and postsynaptic neuronal activity (Whittington and Bogacz, 2017). In practice, we relax this constraint and incorporate some cross-neuron information to decorrelate neurons (see Appendix G.5 for more information).
To build these updates, each neuron $i$ in every layer $l$ of the base model has an MLP, referred to as an update network, associated with it, with output $h^l_{bi}$, where $b$ indexes the training minibatch. The inputs to the MLP are the feedforward activations ($z^l$ and $x^l$) defined above, and feedback weights and an error signal ($V^l$ and $\delta^l$, respectively), which are defined below.

All update networks share meta-parameters $\theta$. Evaluating the statistics of unit activation over a batch of data has proven helpful in supervised learning (Ioffe and Szegedy, 2015). It has similarly proven helpful in hand-designed unsupervised learning rules, such as sparse coding and clustering. We therefore allow $h^l_{bi}$ to accumulate statistics across examples in each training minibatch.
During an unsupervised training step, the base model is first run in a standard feedforward fashion, populating $x^l$ and $z^l$. As in supervised learning, an error signal $\delta^l$ is then propagated backwards through the network. Unlike in supervised backprop, however, this error signal is generated by the corresponding update network for each unit: it is read out by a linear projection of the per-neuron hidden state $h^l_{bi}$, and propagated backward using a set of learned 'backward weights' $V^l$, rather than the transpose of the forward weights $(W^l)^\top$ as would be the case in backprop (diagrammed in Figure 1). This is done to be more biologically plausible (Lillicrap et al., 2016).

Again as in supervised learning, the weight updates $\Delta W^l$ are a product of pre- and postsynaptic signals. Unlike in supervised learning, however, these signals are generated using the per-neuron update networks, as functions of the postsynaptic and presynaptic hidden states $h^l_{bi}$ and $h^{l-1}_{bj}$. The full weight update (which involves normalization and decorrelation across neurons) is defined in Appendix G.5.
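As a structural sketch of one unsupervised step on a single layer (our illustration only: the `tanh` stand-ins below replace the meta-learned per-neuron update networks, and all names and sizes are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 8, 4
W = 0.1 * rng.normal(size=(d_out, d_in))   # forward weights
V = 0.1 * rng.normal(size=(d_out, d_in))   # learned backward weights (not W.T)

def unsupervised_step(W, V, x0, lr=0.01):
    # Forward pass through one layer
    z1 = W @ x0                       # pre-nonlinearity activations
    x1 = np.maximum(z1, 0.0)          # post-nonlinearity activations (ReLU)
    # Stand-ins for the meta-learned per-neuron update networks:
    post = np.tanh(x1)                # postsynaptic signal, one per output neuron
    delta = V.T @ post                # error signal sent down via backward weights
    pre = np.tanh(x0 + delta)         # presynaptic signal, one per input neuron
    dW = np.outer(post, pre)          # weight update: product of pre/post signals
    return W + lr * dW

W_new = unsupervised_step(W, V, rng.normal(size=d_in))
```

Because the update depends only on per-neuron quantities and layer-local weights, the same rule applies unchanged to layers of any width, which is what permits the architecture generalization studied later.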
3.3 Meta-objective
The meta-objective determines the quality of the unsupervised representations. In order to meta-train via SGD, this loss must be differentiable. The meta-objective we use in this work is based on fitting a linear regression to a small number of labeled examples. In order to encourage the learning of features that generalize well, we estimate the linear regression weights on one minibatch $\{x_a, y_a\}$ of data points, and evaluate the classification performance on a second minibatch $\{x_b, y_b\}$ with the same number of data points:

$$\hat{v} = \operatorname*{argmin}_{v} \left( \left\lVert y_a - v^\top x^L_a \right\rVert^2 + \lambda \left\lVert v \right\rVert^2 \right), \qquad (2)$$

where $x^L_a$ and $x^L_b$ are features extracted from the base model on data $x_a$ and $x_b$, respectively. The target labels $y_a$ and $y_b$ consist of one-hot encoded labels and potentially also regression targets from data augmentation (e.g. rotation angle; see Section 4.2). We found that using a cosine distance, $\mathrm{CosDist}(y_b, \hat{v}^\top x^L_b)$, rather than unnormalized squared error improved stability. Note this meta-objective is only used during meta-training and is not used when applying the learned update rule; the inner-loop computation is performed without labels via the UnsupervisedUpdate.

4 Training the Update Rule
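The meta-objective just defined is the quantity the outer loop optimizes. A minimal sketch of it, assuming a closed-form ridge regression with an illustrative regularization constant (both are our simplifications, not the paper's exact implementation):

```python
import numpy as np

def meta_objective(feats_a, y_a, feats_b, y_b, lam=1e-3):
    """Fit linear regression weights v on minibatch a, then score the
    predictions on a held-out minibatch b with a cosine distance.
    Labels are only used here, during meta-training; the inner loop
    never sees them."""
    d = feats_a.shape[1]
    # Closed-form ridge solution: v = (F^T F + lam I)^{-1} F^T y
    v = np.linalg.solve(feats_a.T @ feats_a + lam * np.eye(d), feats_a.T @ y_a)
    pred, target = (feats_b @ v).ravel(), y_b.ravel()
    cos = pred @ target / (np.linalg.norm(pred) * np.linalg.norm(target) + 1e-12)
    return 1.0 - cos  # small when the features linearly expose the targets
```

Fitting on one minibatch and scoring on a second rewards features whose linear readout generalizes, rather than features that merely allow the labeled batch to be memorized.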
4.1 Approximate Gradient-Based Training
We choose to meta-optimize via SGD as opposed to reinforcement learning or other black-box methods, due to the superior convergence properties of SGD in high dimensions and the high-dimensional nature of $\theta$. Training and computing derivatives through long recurrent computation of this form is notoriously difficult (Pascanu et al., 2013). To improve stability and reduce the computational cost, we approximate the gradients via truncated backprop through time (Shaban et al., 2018). Many additional design choices were also crucial to achieving stability and convergence in meta-learning, including the use of batch norm and restricting the norm of the UnsupervisedUpdate step (a full discussion of these and other choices is in Appendix B).

4.2 Meta-training Distribution and Generalization
Generalization in our learned optimizer comes from both the form of the UnsupervisedUpdate (Section 3.2) and from the meta-training distribution. Our meta-training distribution is composed of both datasets and base model architectures.
We construct a set of training tasks consisting of CIFAR10 (Krizhevsky and Hinton, 2009) and multi-class classification from subsets of classes from ImageNet (Russakovsky et al., 2015), as well as from a dataset consisting of rendered fonts (Appendix H.1.1). We find that increased training dataset variation actually improves the meta-optimization process. To reduce computation, we restrict the input data to 16x16 pixels or less during meta-training, and resize all datasets accordingly. For evaluation, we use MNIST (LeCun et al., 1998), Fashion MNIST (Xiao et al., 2017), IMDB (Maas et al., 2011), and a holdout set of ImageNet classes. We additionally sample the base model architecture, sampling the number of layers uniformly between 2 and 5 and the number of units per layer logarithmically between 64 and 512.

As part of preprocessing, we permute all inputs along the feature dimension, so that the UnsupervisedUpdate must learn a permutation-invariant learning rule. Unlike other work, we focus explicitly on learning a learning algorithm, as opposed to the discovery of fixed feature extractors that generalize across similar tasks. This makes the learning task much harder, as the UnsupervisedUpdate has to discover the relationship between pixels based solely on their joint statistics, and cannot "cheat" and memorize pixel identity. To provide further dataset variation, we additionally augment the data with shifts, rotations, and noise. We add these augmentation coefficients as additional regression targets for the meta-objective, e.g. rotating the image and predicting the rotation angle as well as the image class. For additional details, see Appendix H.1.1.
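The permutation preprocessing described above is simple to state in code (function and variable names here are ours, not from the released implementation):

```python
import numpy as np

def permute_input_dims(batch, rng):
    """Apply one random permutation along the feature dimension, shared by
    every example in the task, so that pixel identity carries no information
    and the update rule must rely on joint statistics alone."""
    perm = rng.permutation(batch.shape[1])
    return batch[:, perm]
```

A fresh permutation is drawn per task, so no fixed feature extractor can succeed by memorizing pixel positions across tasks.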
4.3 Distributed Implementation
We implement the above models in distributed TensorFlow (Abadi et al., 2016). Training uses 512 workers, each of which performs a sequence of partial unrolls of the inner-loop UnsupervisedUpdate and computes gradients of the meta-objective asynchronously. Training takes 8 days and consists of 200 thousand updates to $\theta$ with minibatch size 256. Additional details are in Appendix C.

5 Experimental Results
First, we examine limitations of existing unsupervised and meta-learning methods. Then, we show meta-training and generalization properties of our learned optimizer, and finally we conclude by visualizing how our learned update rule works. For details of the experimental setup, see Appendix H.
5.1 Objective Function Mismatch and Existing Meta-Learning Methods
To illustrate the negative consequences of objective function mismatch in unsupervised learning algorithms, we train a variational autoencoder on 16x16 CIFAR10. Over the course of training, we evaluate few-shot classification performance using the learned latent representations. Training curves can be seen in Figure 2. Despite the VAE objective continuing to improve throughout training (not shown here), classification accuracy decreases sharply later in training.

To demonstrate the reduced generalization that results from learning transferable features rather than an update algorithm, we train a prototypical network (Snell et al., 2017) with and without the input shuffling described in Section 4.2. As the prototypical network primarily learns transferable features, performance is significantly hampered by input shuffling. Results are in Figure 2.
5.2 Meta-Optimization
While training, we monitor a rolling average of the meta-objective averaged across all datasets, model architectures, and numbers of unrolling steps performed. In Figure 3, the training loss is continuing to decrease after 200 hours of training, which suggests that the approximate training techniques still produce effective learning. In addition to this global number, we measure performance obtained by rolling out the UnsupervisedUpdate on various meta-training and meta-testing datasets. We see that on held-out image datasets, such as MNIST and Fashion MNIST, the evaluation loss is still decreasing. However, for datasets in a different domain, such as IMDB sentiment prediction (Maas et al., 2011), we start to see meta-overfitting. For all remaining experimental results, unless otherwise stated, we use the meta-parameters $\theta$ for the UnsupervisedUpdate resulting from 200 hours of meta-training.
5.3 Generalization
The goal of this work is to learn a general purpose unsupervised representation learning algorithm. As such, this algorithm must be able to generalize across a wide range of scenarios, including tasks that are not sampled i.i.d. from the meta-training distribution. In the following sections, we explore a subset of the factors we seek to generalize over.
Generalizing over datasets and domains
In Figure 4, we compare performance on few-shot classification with 10 examples per class. We evaluate test performance on the holdout datasets MNIST and Fashion MNIST at two resolutions: 14x14 and 28x28 (larger than any dataset experienced in meta-training). On the same base model architecture, our learned UnsupervisedUpdate leads to performance better than a variational autoencoder, supervised learning on the labeled examples, and a random initialization with a trained readout layer.

Figure 4: Left: the learned UnsupervisedUpdate generalizes to unseen datasets. Our learned update rule produces representations more suitable for few-shot classification than those from a random initialization or a variational autoencoder, and outperforms fully supervised learning on the same labeled examples. Error bars show standard error. Right: early in meta-training (purple), the UnsupervisedUpdate is able to learn useful features on a 2-way text classification dataset, IMDB, despite being meta-trained only on image datasets. Later in meta-training (red), performance drops due to the domain mismatch. We show inner-loop training, consisting of 5k applications of the UnsupervisedUpdate, evaluating the MetaObjective at each iteration. Error bars show standard error across 10 runs.

To further explore generalization limits, we test our learned optimizer on data from a vastly different domain. We train on a binary text classification dataset: IMDB movie reviews (Maas et al., 2011), encoded by computing a bag of words with 1K words. We evaluate using a model 30 hours and 200 hours into meta-training (see Figure 4). Despite being trained exclusively on image datasets, the 30-hour learned optimizer improves upon the random initialization by almost 10%. When meta-training for longer, however, the learned optimizer "meta-overfits" to the image domain, resulting in poor performance. This performance is quite low in an absolute sense for this task. Nevertheless, we find this result very exciting, as we are unaware of any work showing this kind of transfer of learned rules from images to text.
Generalizing over network architectures
We train models of varying depths and unit counts with our learned optimizer and compare results at different points in training (Figure 5). We find that despite only training on networks with 2 to 5 layers and 64 to 512 units per layer, the learned rule generalizes to 11 layers and 10,000 units per layer.

Next we look at generalization over different activation functions. We apply our learned optimizer to base models with a variety of different activation functions, with performance evaluated at different points in training (Figure 5). Despite training only on ReLU activations, our learned optimizer is able to improve on random initializations in all cases. For certain activations, leaky ReLU (Maas et al., 2013) and Swish (Ramachandran et al., 2017), there is little to no decrease in performance. Another interesting case is the step activation function. These activations are traditionally challenging to train, as there is no useful gradient signal. Despite this, our learned UnsupervisedUpdate is capable of optimizing, as it does not use base model gradients, and achieves performance double that of random initialization.
5.4 How it Learns and How it Learns to Learn
To analyze how our learned optimizer functions, we examine the first-layer filters over the course of meta-training. Despite the permutation-invariant nature of our data (enforced by shuffling input image pixels before each unsupervised training run), the base model learns features such as those shown in Figure 6, which appear template-like for MNIST and local-feature-like for CIFAR10. Early in meta-training, the filters are coarse and noisy; as meta-training progresses, more interesting and local features emerge.
In an effort to understand what our algorithm learns to do, we fed it data from the two moons dataset. We find that despite being a 2D dataset, dissimilar from the image datasets used in meta-training, the learned model is still capable of manipulating and partially separating the data manifold in a purely unsupervised manner (Figure 6). We also find that almost all the variance in the embedding space is dominated by a few dimensions. As a comparison, we perform the same analysis on MNIST. In this setting, the explained variance is spread over more of the principal components. This makes sense, as the generative process contains many more latent dimensions: at least enough to express the 10 digits.

Figure 6: Cumulative variance explained using principal components analysis (PCA) on the learned representations. The representation for two moons data (red) is much lower dimensional than that for MNIST (blue), although both occupy a fraction of the full 32-dimensional space.
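The dimensionality comparison in this figure can be reproduced with a few lines of PCA via SVD (a generic analysis sketch, not the paper's code):

```python
import numpy as np

def cumulative_explained_variance(X):
    """Fraction of total variance captured by the top-k principal
    components, for every k, computed from the singular values of
    the centered data matrix."""
    s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
    var = s ** 2
    return np.cumsum(var) / var.sum()
```

An embedding whose curve saturates after a few components, like the two moons representation, is effectively low dimensional even when it lives in a 32-dimensional space.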
6 Discussion
In this work we meta-learn an unsupervised representation learning update rule. We show performance that matches or exceeds existing unsupervised learning methods on held-out tasks. Additionally, the update rule can train models of varying widths, depths, and activation functions. More broadly, we demonstrate an application of meta-learning to complex optimization tasks where no objective is explicitly defined. Analogously to how increased data and compute have powered supervised learning, we believe this work is a proof of principle that the same can be done with algorithm design: replacing hand-designed techniques with architectures designed for learning and learned from data via meta-learning.
Acknowledgments
We would like to thank Samy Bengio, David Dohan, Keren Gu, Gamaleldin Elsayed, C. Daniel Freeman, Sam Greydanus, Nando de Freitas, Ross Goroshin, Ishaan Gulrajani, Eric Jang, Hugo Larochelle, Jeremy Nixon, Esteban Real, Suharsh Sivakumar, Pavel Sountsov, Alex Toshev, George Tucker, Hoang Trieu Trinh, Olga Wichrowska, Lechao Xiao, Zongheng Yang, Jiaqi Zhai and the rest of the Google Brain team for extremely helpful conversations and feedback on this work.
References
 Hochreiter et al. [2001] Sepp Hochreiter, A Steven Younger, and Peter R Conwell. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, pages 87–94. Springer, 2001.
 Schmidhuber [1995] Jürgen Schmidhuber. On learning how to learn learning strategies. 1995.
 Hinton and Salakhutdinov [2006] Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.

 Vincent et al. [2008] Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, pages 1096–1103. ACM, 2008.
 Vincent et al. [2010] Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11(Dec):3371–3408, 2010.
 Kingma and Welling [2013] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
 Le et al. [2011] Quoc V Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg S Corrado, Jeff Dean, and Andrew Y Ng. Building high-level features using large scale unsupervised learning. arXiv preprint arXiv:1112.6209, 2011.
 Goodfellow et al. [2014] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
 Radford et al. [2015] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
 Donahue et al. [2016] Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.
 Dumoulin et al. [2016] Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.

 Noroozi and Favaro [2016] Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In European Conference on Computer Vision, pages 69–84. Springer, 2016.
 Misra et al. [2016] Ishan Misra, C Lawrence Zitnick, and Martial Hebert. Shuffle and learn: unsupervised learning using temporal order verification. In European Conference on Computer Vision, pages 527–544. Springer, 2016.
 Sermanet et al. [2017] Pierre Sermanet, Corey Lynch, Jasmine Hsu, and Sergey Levine. Time-contrastive networks: Self-supervised learning from multi-view observation. arXiv preprint arXiv:1704.06888, 2017.
 Coates and Ng [2012] Adam Coates and Andrew Y Ng. Learning feature representations with k-means. In Neural Networks: Tricks of the Trade, pages 561–580. Springer, 2012.

 Xie et al. [2016] Junyuan Xie, Ross Girshick, and Ali Farhadi. Unsupervised deep embedding for clustering analysis. In International Conference on Machine Learning, pages 478–487, 2016.
 Bojanowski and Joulin [2017] Piotr Bojanowski and Armand Joulin. Unsupervised learning by predicting noise. arXiv preprint arXiv:1704.05310, 2017.
 Schmidhuber [1992] Jürgen Schmidhuber. Learning factorial codes by predictability minimization. Neural Computation, 4(6):863–879, 1992.
 Hochreiter and Schmidhuber [1999] Sepp Hochreiter and Jürgen Schmidhuber. Feature extraction through lococode. Neural Computation, 11(3):679–714, 1999.
 Olshausen and Field [1997] Bruno A Olshausen and David J Field. Sparse coding with an overcomplete basis set: A strategy employed by v1? Vision research, 37(23):3311–3325, 1997.
 Schmidhuber [1987] Jürgen Schmidhuber. Evolutionary principles in self-referential learning, or on learning how to learn: the meta-meta-... hook. PhD thesis, Technische Universität München, 1987.
 Bengio et al. [1990] Yoshua Bengio, Samy Bengio, and Jocelyn Cloutier. Learning a synaptic learning rule. Université de Montréal, Département d’informatique et de recherche opérationnelle, 1990.
 Bengio et al. [1992] Samy Bengio, Yoshua Bengio, Jocelyn Cloutier, and Jan Gecsei. On the optimization of a synaptic learning rule. In Preprints Conf. Optimality in Artificial and Biological Neural Networks, pages 6–8. Univ. of Texas, 1992.

 Runarsson and Jonsson [2000] Thomas Philip Runarsson and Magnus Thor Jonsson. Evolution and design of distributed learning rules. In Combinations of Evolutionary Computation and Neural Networks, 2000 IEEE Symposium on, pages 59–63. IEEE, 2000.
 Vinyals et al. [2016] Oriol Vinyals, Charles Blundell, Tim Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 3630–3638. Curran Associates, Inc., 2016. URL http://papers.nips.cc/paper/6385-matching-networks-for-one-shot-learning.pdf.
 Ravi and Larochelle [2016] Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. International Conference on Learning Representations, 2016.
 Mishra et al. [2017] Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, and Pieter Abbeel. Meta-learning with temporal convolutions. arXiv preprint arXiv:1707.03141, 2017.
 Finn et al. [2017] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1126–1135, International Convention Centre, Sydney, Australia, 06–11 Aug 2017. PMLR. URL http://proceedings.mlr.press/v70/finn17a.html.
 Snell et al. [2017] Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pages 4080–4090, 2017.
 Ren et al. [2018] Mengye Ren, Eleni Triantafillou, Sachin Ravi, Jake Snell, Kevin Swersky, Joshua B Tenenbaum, Hugo Larochelle, and Richard S Zemel. Meta-learning for semi-supervised few-shot classification. arXiv preprint arXiv:1803.00676, 2018.
 Hsu et al. [2018] Kyle Hsu, Sergey Levine, and Chelsea Finn. Unsupervised learning via meta-learning. arXiv preprint arXiv:1810.02334, 2018.
 Garg [2018] Vikas Garg. Supervising unsupervised learning. In Advances in Neural Information Processing Systems, pages 4996–5006, 2018.
 Jones [2001] Donald R Jones. A taxonomy of global optimization methods based on response surfaces. Journal of global optimization, 21(4):345–383, 2001.
 Snoek et al. [2012] Jasper Snoek, Hugo Larochelle, and Ryan P Adams. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pages 2951–2959, 2012.
 Bergstra et al. [2011] James S Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Algorithms for hyperparameter optimization. In Advances in neural information processing systems, pages 2546–2554, 2011.
 Bergstra and Bengio [2012] James Bergstra and Yoshua Bengio. Random search for hyperparameter optimization. Journal of Machine Learning Research, 13(Feb):281–305, 2012.
 Stanley and Miikkulainen [2002] Kenneth O Stanley and Risto Miikkulainen. Evolving neural networks through augmenting topologies. Evolutionary computation, 10(2):99–127, 2002.
 Zoph and Le [2017] Barret Zoph and Quoc V. Le. Neural architecture search with reinforcement learning. International Conference on Learning Representations, 2017. URL https://arxiv.org/abs/1611.01578.
 Baker et al. [2017] Bowen Baker, Otkrist Gupta, Nikhil Naik, and Ramesh Raskar. Designing neural network architectures using reinforcement learning. International Conference on Learning Representations, 2017.

Zoph et al. [2018] Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V Le. Learning transferable architectures for scalable image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
 Real et al. [2017] Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Quoc Le, and Alex Kurakin. Large-scale evolution of image classifiers. arXiv preprint arXiv:1703.01041, 2017.
 Maclaurin et al. [2015] Dougal Maclaurin, David Duvenaud, and Ryan Adams. Gradient-based hyperparameter optimization through reversible learning. In International Conference on Machine Learning, pages 2113–2122, 2015.
 Andrychowicz et al. [2016] Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. In Advances in Neural Information Processing Systems, pages 3981–3989, 2016.
 Chen et al. [2016] Yutian Chen, Matthew W Hoffman, Sergio Gómez Colmenarejo, Misha Denil, Timothy P Lillicrap, Matt Botvinick, and Nando de Freitas. Learning to learn without gradient descent by gradient descent. arXiv preprint arXiv:1611.03824, 2016.
 Li and Malik [2017] Ke Li and Jitendra Malik. Learning to optimize. International Conference on Learning Representations, 2017.
 Wichrowska et al. [2017] Olga Wichrowska, Niru Maheswaranathan, Matthew W Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Nando de Freitas, and Jascha Sohl-Dickstein. Learned optimizers that scale and generalize. International Conference on Machine Learning, 2017.
 Bello et al. [2017] Irwan Bello, Barret Zoph, Vijay Vasudevan, and Quoc Le. Neural optimizer search with reinforcement learning. 2017. URL https://arxiv.org/pdf/1709.07417.pdf.
 Houthooft et al. [2018] Rein Houthooft, Richard Y Chen, Phillip Isola, Bradly C Stadie, Filip Wolski, Jonathan Ho, and Pieter Abbeel. Evolved policy gradients. arXiv preprint arXiv:1802.04821, 2018.
 Ioffe and Szegedy [2015] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Francis Bach and David Blei, editors, Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 448–456, Lille, France, 07–09 Jul 2015. PMLR. URL http://proceedings.mlr.press/v37/ioffe15.html.

Whittington and Bogacz [2017] James CR Whittington and Rafal Bogacz. An approximation of the error backpropagation algorithm in a predictive coding network with local Hebbian synaptic plasticity. Neural Computation, 29(5):1229–1262, 2017.
 Lillicrap et al. [2016] Timothy P Lillicrap, Daniel Cownden, Douglas B Tweed, and Colin J Akerman. Random synaptic feedback weights support error backpropagation for deep learning. Nature Communications, 7:13276, 2016.
Pascanu et al. [2013] Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In International Conference on Machine Learning, pages 1310–1318, 2013.
 Shaban et al. [2018] Amirreza Shaban, Ching-An Cheng, Nathan Hatch, and Byron Boots. Truncated backpropagation for bilevel optimization. arXiv preprint arXiv:1810.10667, 2018.
 Krizhevsky and Hinton [2009] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.
 Russakovsky et al. [2015] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015. doi: 10.1007/s11263-015-0816-y.
 LeCun et al. [1998] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
 Xiao et al. [2017] Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms, 2017.

Maas et al. [2011] Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/P111015.
 Abadi et al. [2016] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. TensorFlow: A system for large-scale machine learning. In OSDI, volume 16, pages 265–283, 2016.
 Maas et al. [2013] Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. Rectifier nonlinearities improve neural network acoustic models. In Proc. icml, volume 30, page 3, 2013.
 Ramachandran et al. [2017] Prajit Ramachandran, Barret Zoph, and Quoc Le. Searching for activation functions. 2017.
 Pascanu et al. [2012] Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. Understanding the exploding gradient problem. CoRR, abs/1211.5063, 2012.
 Kingma and Ba [2014] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
 Mnih et al. [2013] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
 Pearlmutter [1996] Barak Pearlmutter. An investigation of the gradient descent process in neural networks. PhD thesis, Carnegie Mellon University Pittsburgh, PA, 1996.
 Schoenholz et al. [2016] Samuel S Schoenholz, Justin Gilmer, Surya Ganguli, and Jascha Sohl-Dickstein. Deep information propagation. arXiv preprint arXiv:1611.01232, 2016.
Appendix A More Detailed System Diagram
Appendix B Stabilizing gradient-based meta-learning training
Training and computing derivatives through recurrent computation of this form is notoriously difficult [Pascanu et al., 2013]. Training the parameters of recurrent systems can in general lead to chaos. We used standard techniques such as gradient clipping [Pascanu et al., 2012], small learning rates, and adaptive learning rate methods (in our case Adam [Kingma and Ba, 2014]), but in practice these were not enough to train most UnsupervisedUpdate architectures. In this section we address other techniques needed for stable convergence.
When training with truncated backprop, the problem shifts from pure optimization to something more like optimizing on a Markov Decision Process where the state space is the base-model weights and the 'policy' is the learned optimizer. While traversing these states, the policy is constantly meta-optimized and changing, thus changing the distribution of states the optimizer reaches. This type of non-i.i.d. training has been discussed at great length with respect to on- and off-policy RL training algorithms [Mnih et al., 2013]. Other works cast optimizer meta-learning as RL [Li and Malik, 2017] for this very reason, at a large cost in terms of gradient variance. In this work, we partially address this issue by training a large number of workers in parallel, so as to always maintain a diverse set of states when computing gradients.
For similar reasons, the number of steps per truncation and the total number of unsupervised training steps are both sampled, in an attempt to limit the bias introduced by truncation.
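The sampling of unroll lengths can be sketched as follows. This is a minimal illustration: the ranges and the uniform sampling distributions are assumptions chosen for the sketch, not the values used in our experiments.

```python
import random

def sample_unroll_schedule(total_steps_range=(100, 1000),
                           truncation_range=(2, 20), seed=None):
    # Sample a total number of inner-loop training steps, then a list
    # of per-truncation lengths that partition it. Sampling both limits
    # the bias introduced by always truncating at a fixed horizon.
    rng = random.Random(seed)
    total = rng.randint(*total_steps_range)
    truncations = []
    done = 0
    while done < total:
        step = min(rng.randint(*truncation_range), total - done)
        truncations.append(step)
        done += step
    return total, truncations

total, truncations = sample_unroll_schedule(seed=0)
```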
We found restricting the maximum inner-loop step size to be crucial for stability. Pearlmutter [1996] studied the effect of learning rates on the stability of optimization and showed that as the learning rate increases, gradients become chaotic. This effect was later demonstrated with respect to neural network training in Maclaurin et al. [2015]. If learning rates were not constrained, we found that they rapidly grew and entered this chaotic regime.
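One way to restrict the maximum inner-loop step size is to rescale any proposed update whose largest entry exceeds a threshold. This is a sketch only: the rescaling rule and the threshold value are assumptions, not the exact mechanism used in our implementation.

```python
import numpy as np

def restrict_step(update, max_step=1e-2):
    # Rescale a proposed inner-loop weight update so that its largest
    # entry never exceeds max_step, keeping the effective learning
    # rate out of the chaotic large-step regime.
    magnitude = np.max(np.abs(update))
    if magnitude > max_step:
        update = update * (max_step / magnitude)
    return update
```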
Another technique we found useful in addressing these problems is the use of batch norm in both the base model and in the UnsupervisedUpdate rule. Multilayer perceptron training traditionally requires very precise weight initialization for learning to occur. Poorly scaled initialization can make learning impossible [Schoenholz et al., 2016]. When applying a learned optimizer, especially early in meta-training, it is very easy for the learned optimizer to cause high-variance weights in the base model, after which recovery is difficult. Batch norm helps solve these issues by making more of the weight space usable.
Appendix C Distributed Implementation
We implement the described models in distributed TensorFlow [Abadi et al., 2016]. We construct a cluster of 512 workers, each of which computes gradients of the meta-objective asynchronously. Each worker trains on one task by first sampling a dataset, an architecture, and a number of training steps. Next, each worker samples a number of unrolling steps, performs that many applications of the UnsupervisedUpdate, computes the MetaObjective on each new state, and computes and sends this gradient to a parameter server. The final base-model state is then used as the starting point for the next unroll, until the specified number of steps is reached. The gradients from different workers are batched, and the meta-parameters are updated with asynchronous SGD. By batching gradients as workers complete unrolls, we eliminate most gradient staleness while retaining the compute efficiency of asynchronous workers, especially given the heterogeneous workloads which arise from dataset and model size variation. An overview of our training can be seen in algorithm F. Due to the small base models and the sequential nature of our compute workloads, we use multi-core CPUs as opposed to GPUs. Training occurs over the course of 8 days, with 200 thousand updates to the meta-parameters using minibatch size 256.
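The worker/parameter-server pattern above can be sketched in a single process with threads. Everything in this toy (the quadratic loss, the learning rate, the worker count, the bounded queue used to limit staleness) is invented for illustration; the real system used 512 CPU workers and distributed TensorFlow.

```python
import queue
import threading

def worker(grads, theta, num_grads):
    # Each worker repeatedly reads the current parameters and pushes a
    # gradient of a toy loss 0.5 * (theta - 3)^2 to the server.
    for _ in range(num_grads):
        grads.put(theta[0] - 3.0)

def parameter_server(grads, theta, batch_size, num_updates, lr=0.1):
    # Batch gradients from asynchronous workers, then apply SGD.
    # Batching as unrolls complete reduces staleness while keeping
    # heterogeneous workers busy.
    for _ in range(num_updates):
        batch = [grads.get() for _ in range(batch_size)]
        theta[0] -= lr * sum(batch) / len(batch)

grads = queue.Queue(maxsize=4)  # bounded queue limits gradient staleness
theta = [0.0]
workers = [threading.Thread(target=worker, args=(grads, theta, 50))
           for _ in range(4)]
for w in workers:
    w.start()
parameter_server(grads, theta, batch_size=4, num_updates=50)
for w in workers:
    w.join()
```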
Appendix D More Filters Over Meta-Training
Appendix E Meta-Training curves
Appendix F Learning algorithm
Appendix G Model specification
In this section we describe the details of our base model and learned optimizer. First, we describe the inner loop, and then we describe the meta-training procedure in more depth.
In the following sections, λ denotes a hyperparameter for the learned update rule.
The design goals of this system are stated in the text body. At the time of writing, we were unaware of any similar systems, so as such the space of possible architectures was massive. Future work consists of simplifying and improving abstractions around algorithmic components.
An open source implementation of the UnsupervisedUpdate can be found at https://github.com/tensorflow/models/tree/master/research/learning_unsupervised_learning.
g.1 Inner Loop
The inner loop computation consists of iterative application of the UnsupervisedUpdate on a Base Model parameterized by φ,
(App.1) 
where φ consists of forward weights, W, and biases, b, parameterizing a multilayer perceptron, as well as backward weights, V, used by UnsupervisedUpdate during inner-loop training.
This computation can be broken down further as a forward pass on an unlabeled batch of data,
(App.2)  
(App.3) 
where z_l and x_l are the pre- and post-activations of layer l, and x_0 is the input data. We then compute weight updates:
(App.4) 
Finally, the next set of forward weights (W), biases (b), and backward weights (V) is computed. We use an SGD-like update, but with the addition of a decay term. Equivalently, this can be seen as setting the weights and biases to an exponential moving average of the update terms.
(App.5)  
(App.6)  
(App.7) 
We use a fixed value of λ in our work. The update terms are computed via meta-learned functions (parameterized by θ).
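The update form in Equations App.5–App.7 can be sketched as follows. The coefficient name and value here are illustrative assumptions; the point is only the exponential-moving-average structure of the update.

```python
import numpy as np

def ema_update(w, delta_w, alpha=0.001):
    # SGD-like step with a decay term: equivalently, w becomes an
    # exponential moving average of the update terms delta_w.
    return (1.0 - alpha) * w + alpha * delta_w

# Repeated application pulls the weights toward the update terms.
w = np.ones(3)
for _ in range(1000):
    w = ema_update(w, np.full(3, 2.0))
```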
In the following sections, we describe the functional form of the base model, parameterized by φ, as well as the functional form of the UnsupervisedUpdate, parameterized by θ.
g.2 Base Model
The base model, the model our learned update rule is training, is an L-layer multilayer perceptron with batch norm. We define φ as this model's parameters, consisting of weights (W) and biases (b) as well as the backward weights (V) used only during inner-loop training (applications of UnsupervisedUpdate). We also define the sizes of the layers of the base model and the size of the input data.
(App.8) 
where
(App.9)  
(App.10)  
(App.11) 
where the sizes above are the hidden sizes of the network, the input size of the data, and the size of the output embedding. In this work, we fix the size of the output layer and vary the remaining intermediate hidden sizes.
The forward computation, parameterized by φ, consumes batches of unlabeled data drawn from a dataset:
(App.12)  
(App.13)  
(App.14) 
for layers l = 1, …, L.
We define a function that returns the set of internal pre- and post-activation hidden states, as well as a function returning the final hidden state:
(App.15)  
(App.16) 
g.3 MetaObjective
We define the MetaObjective to be few-shot linear regression. To increase stability and avoid undesirable loss landscapes, we additionally center and normalize the predicted target before computing the loss. The full computation is as follows:
(App.17) 
where the labels are one-hot encoded over the number of classes.
First, the inputs are converted to embeddings with the base model,
(App.18)  
(App.19) 
Next, we center and normalize the prediction targets. We show this for the first batch of labels; the second batch is processed identically.
(App.20)  
(App.21) 
We then solve for the linear regression weights in closed form, using the embeddings as features and the normalized labels as targets. We account for a bias by concatenating a ones vector to the features.
(App.22)  
(App.23) 
We then use these inferred regression weights to make a prediction on the second batch of data, normalize the resulting prediction, and compute a final loss,
(App.24)  
(App.25)  
(App.26) 
Note that due to the normalization of predictions and targets, this corresponds to a cosine distance (up to an offset and multiplicative factor).
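A sketch of this meta-objective in NumPy follows. The centering, normalization, bias column, and normalized-prediction loss follow the description above; the ridge penalty added for numerical stability of the closed-form solve is an assumption of this sketch.

```python
import numpy as np

def meta_objective(emb_a, y_a, emb_b, y_b, ridge=1.0):
    # Few-shot linear regression meta-objective: solve for regression
    # weights on batch a in closed form, predict on batch b, and
    # compute a loss between normalized predictions and targets.
    def center_normalize(y):
        y = y - y.mean(axis=0, keepdims=True)  # center the targets
        return y / (np.linalg.norm(y) + 1e-8)  # fix the overall norm
    y_a = center_normalize(y_a)
    y_b = center_normalize(y_b)
    # account for a bias by concatenating a ones vector to the features
    f_a = np.concatenate([emb_a, np.ones((emb_a.shape[0], 1))], axis=1)
    v = np.linalg.solve(f_a.T @ f_a + ridge * np.eye(f_a.shape[1]),
                        f_a.T @ y_a)
    f_b = np.concatenate([emb_b, np.ones((emb_b.shape[0], 1))], axis=1)
    pred = f_b @ v
    pred = pred / (np.linalg.norm(pred) + 1e-8)  # normalize predictions
    # with normalized predictions and targets, this squared error is a
    # cosine distance up to an offset and multiplicative factor
    return float(np.sum((pred - y_b) ** 2))
```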
g.4 UnsupervisedUpdate
The learned update rule is parameterized by θ. In the following section, we denote all instances of θ with a subscript as separate named learnable parameters of the UnsupervisedUpdate. θ is shared across all instantiations of the unsupervised learning rule (shared across layers). It is this weight sharing that lets us generalize across different network architectures.
The computation is split into a few main components: first, there is the forward pass, defined above. Next, there is a "backward" pass, which operates on the hidden states in reverse order to propagate an error signal back down the network. In this process, a hidden state is created for each layer. These hidden states, h, are tensors with a batch, neuron, and feature index. Weight updates to W and V are then read out from these and other signals found both locally and in aggregate along both the batch and neuron dimensions.
g.4.1 Backward error propagation
In backprop, there exists a single scalar error signal that gets backpropagated down the network. In our work, this error signal does not exist, as there is no loss being optimized. Instead, we have a learned top-down signal at the top of the network. Because we are no longer restricted to the form of backprop, we make this quantity a vector for each unit in the network rather than a scalar,
(App.27)  
(App.28)  
(App.29) 
In this work, we set the size of this feature dimension to a fixed value. The architecture of this top-down function is a neural network that operates along every dimension of its input; the specification can be found in G.7.
We structure our error propagation similarly to the structure of the backpropagated error signal in a standard MLP with contributions to the error at every layer in the network,
(App.31) 
where each error contribution has the same shape as the corresponding hidden state. Note that the associated parameters are shared across all layers l.
In a similar way to backprop, we move this signal down the network by multiplying by a backward weight matrix, V. We do not use the transpose of the forward weight matrix as is done in backprop; instead, we learn a separate set of weights that are not tied to the forward weights and are updated along with them, as described in G.5. Additionally, we normalize the signal to have a fixed second moment,
(App.32)  
(App.33) 
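The fixed-second-moment normalization can be sketched as below; the epsilon stabilizer is an assumption of this sketch.

```python
import numpy as np

def normalize_second_moment(d, eps=1e-8):
    # Rescale the backward signal so its mean squared value is one.
    return d / np.sqrt(np.mean(d ** 2) + eps)
```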
The internal vectors are computed via:
(App.34) 
The architecture of this function is a neural network that operates on every dimension of all the inputs; it can be found in G.8. These definitions are recursive and are computed in order, from the top of the network down. With these computed, weight updates can be read out (Section G.5). When a corresponding symbol is not defined (e.g. at the boundaries of the network), a zeros tensor with the correct shape is used instead.
g.5 Weight Updates
The following is the implementation of the weight updates. For a given layer l, our weight updates are a mixture of multiple low-rank readouts from the hidden states h. These terms are added together with learnable weights to form a final update. The final update is then normalized and mixed with the previous weights. We update both the forward weights, W, and the backward weights, V, using the same update rule parameters θ. We show the forward weight update rule here, and drop the backward one for brevity.
For convenience, we define a low-rank readout function that takes h-like tensors and outputs a single lower-rank tensor.
(App.35)  
(App.36) 
Here, θ is a placeholder for the parameters of the given readout. The readout is defined as:
(App.37)  
(App.38)  
(App.39)  
(App.40) 
where k is the rank of the readout matrix (per batch element).
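A shape-level sketch of such a low-rank readout is given below. The tensor layout [batch, neuron, feature], the einsum contraction, and the batch averaging are assumptions made for illustration of the low-rank structure, not the exact computation in our implementation.

```python
import numpy as np

def low_rank_readout(h_pre, h_post, p_pre, p_post):
    # h_pre: [batch, n_pre, features], h_post: [batch, n_post, features]
    # p_pre, p_post: [features, k] readout parameters, k = readout rank.
    a = np.einsum('bnf,fk->bnk', h_pre, p_pre)
    b = np.einsum('bmf,fk->bmk', h_post, p_post)
    # Sum of rank-k outer products, averaged over the batch, yields a
    # weight-update plane of shape [n_pre, n_post].
    return np.einsum('bnk,bmk->nm', a, b) / h_pre.shape[0]
```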
g.5.1 Local terms
This sequence of terms allows the weight to be adjusted as a function of the state of the pre- and post-synaptic neurons. They should be viewed as a basis function representation of the way in which the weight changes as a function of the pre- and post-synaptic neurons and the current weight value. We express each of these contributions to the weight update as a sequence of weight update planes. Each of these planes is linearly summed, with coefficients generated as described in Equation App.57, in order to generate the eventual weight update.
(App.41)  
(App.42)  
(App.43)  
(App.44)  
(App.45)  
(App.46)  
(App.47) 
g.5.2 Decorrelation terms
Additional weight update planes are designed to aid units in remaining decorrelated from each other's activity, and in decorrelating their receptive fields. Without terms like this, a common failure mode is for many units in a layer to develop near-identical representations. Here, each scratch matrix is associated with a particular weight update plane and layer.
(App.48)  
(App.49)  
(App.50)  
(App.51)  
(App.52)  
(App.53)  
(App.54)  
(App.55) 
g.6 Application of the weight terms in the optimizer
We then normalize, reweight, and merge each of these weight update planes into a single term, which is used in the weight update step,
(App.57)  
(App.58) 
where the sum runs over all 10 input planes.
To prevent pathologies during training, we perform two post-processing steps that keep the learned optimizer from cheating by increasing its effective learning rate, which leads to instability. First, we only allow updates which do not decrease the weight matrix magnitude,
(App.59) 
where the weight matrix is scaled to have unit norm. Second, we normalize the length of the update,
(App.60) 
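These two constraints can be sketched as below. The specific projection used to discard norm-decreasing components and the fixed step length are assumptions chosen for illustration of the constraints described above.

```python
import numpy as np

def postprocess_update(w, dw, step=0.01, eps=1e-8):
    # Two stabilizing steps on a proposed weight update:
    # (1) remove the component of the update that would shrink the
    #     weight matrix norm,
    # (2) rescale the update to a fixed length.
    w_hat = w / (np.linalg.norm(w) + eps)
    radial = np.sum(dw * w_hat)
    if radial < 0:
        # drop the norm-decreasing component along the weight direction
        dw = dw - radial * w_hat
    # normalize the length of the update
    return step * dw / (np.linalg.norm(dw) + eps)
```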
To compute changes in the biases, we do a readout from the hidden states. We put constraints on this update to prevent the biases from pushing all units into the linear regime and thereby minimizing learning, which we found to be a possible pathology.
(App.61)  
(App.62) 
where .
We then normalize the update via the second moment:
(App.63) 
Finally, we define the full update as all layers' forward weight updates, backward weight updates, and bias updates.
(App.64) 
g.7
This function performs various convolutions over the batch dimension and the data dimension. For ease of notation, we use an intermediate variable for the running activations. Additionally, we drop all convolution and batch norm parameterizations; they are all separate elements of θ. We define two 1D convolution operators that act on rank-3 tensors: one which performs convolutions over the zeroth index, and one which performs convolutions along the first index. The first argument is the number of hidden units and the second is the kernel size of the 1D convolutions. Additionally, we set the second argument of the batch norm to be the axis normalized over.
(App.65) 
First, we reshape the input to add another dimension at the end; in pseudocode:
(App.66) 
Next, a convolution is performed on the batch dimension with batch norm and a ReLU nonlinearity. This starts to pull information from around the batch into these channels.
(App.67)  
(App.68) 
We set the number of channels to a fixed value. Next, a series of unit convolutions (convolutions over the last dimension) is performed. These act as compute over information composed from nearby elements of the batch. These unit dimensions effectively rewrite the batch dimension; this restricts the model to operating on a fixed-size batch.
(App.70)  
(App.71)  
(App.72)  
(App.73) 
Next, a series of 1D convolutions is performed over the batch dimension for more compute capacity.
(App.74)  
(App.75)  
(App.76)  
(App.77) 
Finally, we convert the representations to the desired dimensions and output.
(App.78)  
(App.79) 
g.8
This is the main computation performed while transferring signals down the network, and its output is directly used for weight updates. It is defined as:
(App.80) 
The outputs of the base model (the pre- and post-activations), plus additional positional embeddings, are stacked and then concatenated with the backward signal to form a single tensor:
(App.81)  
(App.82)  
(App.83)  
(App.84) 
Statistics across the batch and unit dimensions, 0 and 1, are computed. We define this function below. We have two instances, for the zeroth and the first index; shown below is the zeroth-index version, and the first is omitted.