1 Introduction
In deep learning, the convolutional neural network (CNN)
LeCun et al. (1998) is a prime example of exploiting equivariance to a symmetry transformation to conserve parameters and improve generalization. In image classification Russakovsky et al. (2015); Krizhevsky et al. (2012) and audio processing Graves and Jaitly (2014); Hannun et al. (2014) tasks, we may expect the layers of a deep network to learn feature detectors that are translation equivariant: if we translate the input, the output feature map is also translated. Convolution layers satisfy translation equivariance by definition, and produce remarkable results on these tasks. The success of convolution’s “built in” inductive bias suggests that we can similarly exploit other equivariances to solve machine learning problems.
However, there are substantial challenges with building in inductive biases. Identifying the correct biases to build in is challenging, and even if we do know the correct biases, it is often difficult to build them into a neural network. Practitioners commonly avoid this issue by “training in” desired equivariances (usually the special case of invariances) using data augmentation. However, data augmentation can be challenging in many problem settings and we would prefer to build the equivariance into the network itself. Additionally, building in incorrect biases may actually be detrimental to final performance Liu et al. (2018b).
In this work we aim for an approach that can automatically learn and encode equivariances into a neural network. This would free practitioners from having to design custom equivariant architectures for each task, and allow them to transfer any learned equivariances to new tasks. Neural network layers can achieve various equivariances through parameter sharing patterns, such as the spatial parameter sharing of standard convolutions. The particular sharing pattern depends on the equivariance, and in this paper we reparameterize network layers to learnably represent sharing patterns. We leverage meta-learning to learn the sharing patterns that help a model to generalize on new tasks.
The primary contribution of this paper is a general approach to automatically learn and build in equivariances to symmetries observed in data, without requiring custom designed equivariant architectures and using only generic neural network components. We show theoretically that this method can produce networks equivariant to any finite symmetry group. Our experiments show that our method can not only recover various convolutional architectures from data, but also learns invariances to a variety of transformations usually obtained via data augmentation.
2 Related Work
A number of works have studied designing layers with equivariances to certain transformations such as permutation, rotation, reflection, and scaling Gens and Domingos (2014); Cohen and Welling (2016); Zaheer et al. (2017); Worrall et al. (2017); Cohen et al. (2019); Weiler and Cesa (2019); Worrall and Welling (2019). These approaches focus on manually constructing layers analogous to standard convolution, but equivariant to other symmetry groups besides translation. More theoretical work has characterized the nature of equivariant layers for various symmetry groups Kondor and Trivedi (2018); Shawe-Taylor (1989); Ravanbakhsh et al. (2017). Rather than building symmetries into the architecture, data augmentation Beymer and Poggio (1995); Niyogi et al. (1998) trains a network to satisfy them. There is also a hybrid approach that pretrains a basis of rotated filters in order to define roto-translation equivariant convolution Diaconu and Worrall (2019). Unlike these works, we aim to automatically build in symmetries by acquiring them from data.
Prior work on automatically learning symmetries is more sparse, and includes works that focus on Gaussian processes van der Wilk et al. (2018) and symmetries of physical systems Greydanus et al. (2019); Cranmer et al. (2020). Automatically discovering data augmentation strategies Cubuk et al. (2018); Lorraine et al. (2019) can also be considered a way of learning symmetries, but does not embed these symmetries into the model itself.
Our work is related to neural network architecture search Zoph and Le (2016); Brock et al. (2017); Liu et al. (2018a); Elsken et al. (2018), which also aims to automate part of the model design process. Although architecture search methods are varied, they are generally not designed to exploit symmetry or learn equivariances. Evolutionary methods for learning both network weights and topology Stanley and Miikkulainen (2002); Stanley et al. (2009) are also not motivated by symmetry considerations.
Our method learns to exploit symmetries that are shared by a collection of tasks, a form of meta-learning Thrun and Pratt (2012); Schmidhuber (1987); Bengio et al. (1992); Hochreiter et al. (2001). We extend gradient-based meta-learning Finn et al. (2017); Li et al. (2017); Antoniou et al. (2018) to separately learn parameter sharing patterns (which enforce equivariance) and actual parameter values. Separately representing network weights in terms of a sharing pattern and parameter values is a form of reparameterization. Prior work has used weight reparameterization in order to “warp” the loss surface Lee and Choi (2018); Flennerhag et al. (2019) and to learn good latent spaces Rusu et al. (2018) for optimization, rather than to encode equivariance. HyperNetworks Ha et al. (2016) generate network layer weights using a separate smaller network, which can be viewed as a nonlinear reparameterization, albeit not one that encourages learning equivariances. Modular meta-learning Alet et al. (2018) is a related technique that aims to achieve combinatorial generalization on new tasks by stacking meta-learned “modules,” each of which is a neural network. This can be seen as parameter sharing by reusing and combining these modules, rather than by layerwise reparameterization as in our work.
3 Preliminaries
In Sec. 3.1, we review gradient-based meta-learning, which underlies our algorithm. Sections 3.2 and 3.3 build up a formal definition of equivariance and of group convolution Cohen and Welling (2016), a generalization of standard convolution which defines equivariant operations for other groups such as rotation and reflection. These concepts are important for a theoretical understanding of our work as a method for learning group convolutions in Sec. 4.2.
3.1 Gradient-Based Meta-Learning
Our method is a gradient-based meta-learning algorithm that extends MAML Finn et al. (2017), which we briefly review here. Suppose we have some task distribution p(T), where each task T is split into training and validation datasets (D_T^tr, D_T^val). For a model with parameters θ, loss L, and learning rate α, the “inner loop” updates θ using the task’s training data, shown here with one gradient descent step:
φ_T = θ − α ∇_θ L(θ, D_T^tr)    (1)
In the “outer loop,” MAML meta-learns a good initialization θ using the loss of the adapted parameters φ_T on the task’s validation data, with outer learning rate η:
θ ← θ − η ∇_θ Σ_T L(φ_T, D_T^val)    (2)
Although MAML focuses on meta-learning the inner loop initialization θ, one can extend this idea to meta-learning other things such as the inner learning rate α. In our method, we meta-learn a parameter sharing pattern at each layer that maximizes performance across the task distribution.
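To make the two loops concrete, the following sketch runs MAML on toy linear regression tasks, where the meta-gradient of Eq. 2 can be written in closed form. All sizes, step sizes, and task definitions are illustrative, not the paper's settings:

```python
import numpy as np

# Hedged sketch of MAML's two loops (Eqs. 1-2) for a linear model
# y = x . theta with squared error, where the derivative of the outer
# objective through the inner step has a closed form.
rng = np.random.default_rng(0)
alpha, eta = 0.1, 0.05            # inner step size and outer (meta) step size

def loss_and_grad(theta, X, y):
    r = X @ theta - y
    return np.mean(r ** 2), 2 * X.T @ r / len(y)

def maml_meta_step(theta, tasks):
    meta_grad = np.zeros_like(theta)
    val_losses = []
    for X_tr, y_tr, X_val, y_val in tasks:
        # Inner loop (Eq. 1): one gradient step on the task's training data.
        _, g_tr = loss_and_grad(theta, X_tr, y_tr)
        phi = theta - alpha * g_tr
        # Outer loop (Eq. 2): differentiate the validation loss of phi with
        # respect to theta; d phi / d theta = I - alpha * H_tr here.
        H_tr = 2 * X_tr.T @ X_tr / len(y_tr)
        L_val, g_val = loss_and_grad(phi, X_val, y_val)
        meta_grad += (np.eye(len(theta)) - alpha * H_tr) @ g_val
        val_losses.append(L_val)
    return theta - eta * meta_grad / len(tasks), float(np.mean(val_losses))

def make_task():
    w = rng.normal(size=3)                       # task-specific ground truth
    X_tr, X_val = rng.normal(size=(4, 3)), rng.normal(size=(16, 3))
    return X_tr, X_tr @ w, X_val, X_val @ w

theta, tasks, losses = np.zeros(3), [make_task() for _ in range(8)], []
for _ in range(100):
    theta, L = maml_meta_step(theta, tasks)
    losses.append(L)
```

Because the inner loss is quadratic here, the Jacobian I − α H_tr is exact; for general models this backpropagation through the inner step is done by automatic differentiation.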
3.2 Groups and Group Actions
Symmetry and equivariance are usually studied in the context of groups and their actions on sets; refer to Dummit and Foote (2004) for more comprehensive coverage. A group is a set closed under some associative binary operation, where there is an identity element and each element has an inverse. For example, consider the group of 2D translations (R^2, +): we can add any two vectors of R^2 to obtain another, each vector has an additive inverse, and the zero vector is the identity. A group G can act on a set X through some action ρ: G → Aut(X), which maps each g ∈ G to some transformation ρ(g) on X. ρ must be a homomorphism, i.e. ρ(g₁g₂) = ρ(g₁)ρ(g₂) for all g₁, g₂ ∈ G, and Aut(X) is the set of automorphisms on X (bijective homomorphisms from X to itself). As shorthand we often write gx for ρ(g)(x), for any x ∈ X. Any group G can act on itself by letting X = G: for g, h ∈ G, we define the action of g on h to be the group product gh ∈ G.
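As a concrete instance of these definitions, the cyclic group Z_4 acting on R^4 by cyclic shifts can be checked numerically. This is a minimal illustrative sketch; the group and representation are chosen purely for concreteness:

```python
import numpy as np

# Hedged illustration of the definitions above: the cyclic group Z_4
# (integers under addition mod 4) acting on R^4 by cyclic shifts. rho maps
# each group element to a permutation matrix, and we check that it is a
# homomorphism and that the identity and inverses behave as required.
def rho(g):
    return np.roll(np.eye(4), g, axis=0)      # shift-by-g permutation matrix

for g in range(4):
    for h in range(4):
        # homomorphism: rho(g + h) = rho(g) rho(h)
        assert np.array_equal(rho((g + h) % 4), rho(g) @ rho(h))
assert np.array_equal(rho(0), np.eye(4))      # identity element acts trivially
for g in range(4):
    # each element's inverse undoes its action
    assert np.array_equal(rho(g) @ rho((-g) % 4), np.eye(4))
```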
To define group convolution it is useful to consider a layer’s inputs and outputs as functions on some underlying space X. For example, a vector in R^n is a function mapping indices {1, …, n} to real numbers. We denote the vector space of all real-valued functions on X by F(X). For a group G, we can define how G acts on F(X) by a representation π, which maps each g ∈ G to an invertible linear transformation π(g) on F(X). If G already acts on X, the “natural” representation of the action is [π(g)x](u) = x(g⁻¹u). To provide a concrete example, consider images as functions mapping pixel locations to intensities, and let G = (R^2, +) act on X = R^2 by translation as defined above. Then [π(t)x](u) = x(u − t) for any t ∈ R^2. Hence this representation of G on images translates images by translating their underlying space (the 2D plane).
3.3 Equivariance and Convolution
A function (like a neural network layer) is equivariant to some transformation if transforming the function’s input is the same as transforming its output. To be more precise, we must define what those transformations of the input and output are. Consider a neural network layer as a function on functions, Φ: F(X) → F(Y), for two underlying spaces X, Y. Assume we have some group G and two representations π₁, π₂, where π₁ defines how g ∈ G transforms the input, while π₂ defines how g transforms the output. We define equivariance with respect to these transformations:
Φ(π₁(g)x) = π₂(g)Φ(x),  for all x ∈ F(X), g ∈ G    (3)
If we choose π₂(g) = id for all g, we get Φ(π₁(g)x) = Φ(x), showing that invariance is a special case of equivariance.
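For intuition, Eq. 3 can be checked numerically for a 1-D circular convolution, a translation-equivariant layer. This is an illustrative sketch, not the paper's code:

```python
import numpy as np

# Hedged numerical check of Eq. 3 for a 1-D circular convolution layer:
# translating the input translates the output (equivariance), while a
# sum-pooled output is unchanged (invariance, i.e. pi_2 = identity).
rng = np.random.default_rng(0)
psi = rng.normal(size=5)                     # filter
x = rng.normal(size=32)                      # input signal

def circ_conv(x, psi):
    n, k = len(x), len(psi)
    return np.array([sum(psi[j] * x[(i + j) % n] for j in range(k))
                     for i in range(n)])

shift = 7
lhs = circ_conv(np.roll(x, shift), psi)      # Phi(pi_1(g) x)
rhs = np.roll(circ_conv(x, psi), shift)      # pi_2(g) Phi(x)
assert np.allclose(lhs, rhs)                 # equivariance (Eq. 3)
assert np.isclose(lhs.sum(), circ_conv(x, psi).sum())  # invariance of pooling
```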
Deep networks contain many layers, but fortunately function composition preserves equivariance. So if we achieve equivariance in each individual layer, the whole network will be equivariant. Pointwise nonlinearities such as ReLU and sigmoid are already equivariant to any permutation of the input and output indices, which includes translation, reflection, and rotation. Hence we are primarily focused on enforcing equivariance in the linear layers.
Prior work Kondor and Trivedi (2018) has shown that a linear layer is equivariant to the action of some group G if and only if it is a group convolution, which generalizes standard convolution to arbitrary groups. For a specific G, we call the corresponding group convolution “G-convolution” to distinguish it from standard convolution. Intuitively, G-convolution transforms a filter according to each g ∈ G, then computes a dot product between the transformed filter and the input. In standard convolution, the filter transformations correspond to translation (Fig. 1). More formally, assume X is a finite set. G-equivariant layers convolve an input x: X → R with a filter ψ: X → R:
(x ⋆ ψ)(g) = Σ_{u ∈ X} x(u) ψ(g⁻¹u)    (4)
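As a sanity check on Eq. 4, the sketch below instantiates G-convolution for the cyclic group Z_n, where it reduces to circular cross-correlation, and verifies the resulting shift equivariance. This is illustrative code, not the paper's implementation:

```python
import numpy as np

# Hedged sketch of Eq. 4 for the finite group G = Z_n (integers mod n),
# where g^{-1} h is (h - g) mod n, so G-convolution reduces to circular
# cross-correlation; shifting the input then shifts the output.
n = 8
rng = np.random.default_rng(0)
x, psi = rng.normal(size=n), rng.normal(size=n)

def g_conv(x, psi):
    # (x * psi)(g) = sum_h x(h) psi(g^{-1} h)
    return np.array([sum(x[h] * psi[(h - g) % n] for h in range(n))
                     for g in range(n)])

s = 3
shifted = np.array([x[(h - s) % n] for h in range(n)])   # input acted on by s
assert np.allclose(g_conv(shifted, psi), np.roll(g_conv(x, psi), s))
```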
In this work, we present a method that represents and learns parameter sharing patterns for existing layers, such as fully connected layers. These sharing patterns can force the layer to implement various group convolutions, and hence equivariant layers.
4 Encoding and Learning Equivariance
In order to learn equivariances automatically, our method introduces a flexible representation that can encode possible equivariances, and an algorithm for learning which equivariances to encode. Here we describe this method, which we call Meta-learning Symmetries by Reparameterization (MSR).
4.1 Learnable Parameter Sharing
As Fig. 1 shows, a fully connected layer can implement standard convolution if its weight matrix is constrained with a particular sharing pattern, where each row contains a translated copy of the same underlying filter parameters. This idea generalizes to equivariant layers for other transformations like rotation and reflection, but the sharing pattern depends on the transformation. Since we do not know the sharing pattern a priori, we “reparameterize” fully connected weight matrices to represent them in a general and flexible fashion. A fully connected layer has weight matrix W ∈ R^{m×n}:
y = Wx    (5)
We can optionally incorporate biases by appending a dimension with value “1” to the input x. We factorize W as the product of a “symmetry matrix” U ∈ R^{mn×k} and a vector of “filter parameters” v ∈ R^k:
vec(W) = Uv    (6)
For fully connected layers, we reshape the vector Uv ∈ R^{mn} into a weight matrix W ∈ R^{m×n}. Intuitively, U encodes the pattern by which the weights W will “share” the filter parameters v. Crucially, we can now separate the problem of learning the sharing pattern (learning U) from the problem of learning the filter parameters v. In Sec. 4.3, we discuss how to learn U from data.
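A minimal sketch of this reparameterization follows; the sizes and the random U are illustrative, and in the actual method U is meta-learned rather than random:

```python
import numpy as np

# Minimal sketch of the reparameterization in Eq. 6: the layer's weights are
# W = reshape(U v), where the symmetry matrix U encodes the sharing pattern
# and v holds the filter parameters. All names and sizes are illustrative.
rng = np.random.default_rng(0)
m, n, k = 3, 4, 6                  # output dim, input dim, filter size

U = rng.normal(size=(m * n, k))    # sharing pattern (fixed within a task)
v = rng.normal(size=k)             # filter parameters (adapted per task)

def layer(x, U, v):
    W = (U @ v).reshape(m, n)      # weights are a linear function of v
    return W @ x

x = rng.normal(size=n)
y = layer(x, U, v)

# For a fixed U, the weights are confined to a k-dimensional subspace of
# R^{m x n}; scaling v scales W (and hence the output) linearly.
assert y.shape == (m,)
assert np.allclose(layer(x, U, 2 * v), 2 * y)
```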
The symmetry matrix U of each layer has mn × k entries, which can become too expensive in larger layers. Kronecker factorization is a common approach for approximating a very large matrix with smaller ones Martens and Grosse (2015); Park and Oliva (2019). In Appendix A we describe how we apply the Kronecker approximation to Eq. 6, and analyze memory and computation efficiency.
In practice, there are certain equivariances that would be expensive to meta-learn, but that we know to be useful: for example, standard 2D convolutions for image data. However, there may be still other symmetries of the data (e.g., rotation, scaling, reflection) that we still wish to learn automatically. This suggests a “hybrid” approach, where we bake in certain equivariances we know to be useful, and learn the others. Indeed, we can directly reparameterize a standard convolution layer by reshaping Uv into a convolution filter bank rather than a weight matrix. By doing so we bake in translational equivariance, but we can still learn things like rotation equivariance from data.
4.2 Parameter sharing and group convolution
Here we show that by properly choosing the symmetry matrix U of Eq. 6, we can force the layer to implement arbitrary group convolutions (Eq. 4) with filter v. Recall that group convolutions generalize standard convolution to define operations that are equivariant to other groups, such as rotation and permutation. Hence by choosing U properly we can enforce arbitrary equivariances, which will be preserved regardless of the value of v.
For simplicity, we’ll work with an input of the form x: G → R, although the result easily generalizes to multichannel inputs. We assume that x has finite support on G and can therefore be represented as a vector x ∈ R^n, where n = |G|. In practice, this is always the case: for example, a discretized image is only nonzero at a finite number of pixel locations. Then we can formalize our claim:
Proposition 1
Suppose G is a finite group of order n. There exists a fixed U ∈ R^{n²×n} such that for any v ∈ R^n, the layer with weights vec(W) = Uv implements G-convolution on the input x. Moreover, with this fixed choice of U, any G-convolution can be represented by a weight matrix W for some v.
We present a proof in Appendix B. Intuitively, U can store the symmetry transformations ρ(g) for each g ∈ G, thus capturing how the filters should transform during convolution. For example, Fig. 2 shows how to choose U to implement convolution on a permutation group.
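For the cyclic group Z_4, this construction can be written out directly: stacking the permutation matrices ρ(g) into U yields a weight matrix that computes Eq. 4 exactly. This is an illustrative sketch of the proposition, not the general proof:

```python
import numpy as np

# Hedged illustration of Proposition 1 for G = Z_4: stacking the group's
# permutation matrices rho(g) into U makes W = reshape(U v) compute
# G-convolution (Eq. 4) with filter v, for *any* choice of v.
n = 4
def rho(g):
    return np.roll(np.eye(n), g, axis=0)      # (rho(g) v)[h] = v[(h - g) % n]

U = np.concatenate([rho(g) for g in range(n)], axis=0)   # shape (n*n, n)

rng = np.random.default_rng(0)
v, x = rng.normal(size=n), rng.normal(size=n)
W = (U @ v).reshape(n, n)

# Direct evaluation of Eq. 4 with g^{-1} h = (h - g) mod n:
direct = np.array([sum(x[h] * v[(h - g) % n] for h in range(n))
                   for g in range(n)])
assert np.allclose(W @ x, direct)
```

Note that U here is fixed by the group alone, while v remains a free filter, matching the split between symmetry structure and filter values described above.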
Subject to having a correct U, v is precisely the convolution filter ψ in a G-convolution. This motivates separately learning the convolution filter and the symmetry structure in the inner and outer loops of a meta-learning process, respectively.
4.3 Meta-learning equivariances
Meta-learning generally applies when we want to learn and exploit some shared structure in a distribution of tasks p(T). In this case, we assume the task distribution has some common underlying symmetry: i.e., models trained for each task should satisfy some set of shared equivariances. We extend gradient-based meta-learning to automatically learn those equivariances.
Suppose we have an L-layer network; we first collect each layer’s symmetry matrices and filter parameters:

U = {U^(1), …, U^(L)},  v = {v^(1), …, v^(L)}    (7)
Since we aim to learn equivariances that are shared across p(T), the symmetry matrices U should not change with the task. Hence, for any task T the inner loop fixes U and only updates v using the task training data:
v′ = v − α ∇_v L(U, v, D_T^tr)    (8)
where L is simply the supervised learning loss, and α is the inner loop step size. During meta-training, the outer loop then computes the loss on the task’s validation data using v′, and updates U:

U ← U − η ∇_U L(U, v′, D_T^val)    (9)
We illustrate the inner and outer loop updates in Fig. 3. Note that in addition to meta-learning the symmetry matrices, we can also still meta-learn the filter initialization v as in prior work. In practice we also take outer updates averaged over minibatches of tasks, as we describe in Alg. 1.
After meta-training is complete, we freeze the symmetry matrices U. On a new test task, we use the inner loop (Eq. 8) to update only the filter parameters v. The frozen U enforces the meta-learned equivariance-inducing parameter sharing in each layer. This sharing improves generalization by reducing the number of task-specific inner loop parameters. For example, the sharing pattern of standard convolution guarantees that the weight matrix is constant along any diagonal, reducing the number of per-task parameters (see Fig. 1).
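The two loops can be sketched end-to-end on the kind of synthetic translation-equivariant tasks used in Sec. 5. This is illustrative code; for brevity it uses a first-order outer gradient, whereas the full method differentiates through the inner step:

```python
import numpy as np

# Hedged sketch of the MSR meta-training loop (Eqs. 8-9) for one
# reparameterized linear layer, on tasks generated by random circulant
# (translation-equivariant) ground-truth layers. The outer loop here treats
# the adapted v' as constant with respect to U, a first-order shortcut.
rng = np.random.default_rng(0)
m = n = k = 4
alpha, eta = 0.1, 0.1             # inner and outer step sizes (illustrative)

def forward(U, v, X):
    return X @ (U @ v).reshape(m, n).T

def grads(U, v, X, Y):
    R = forward(U, v, X) - Y
    dW = 2 * R.T @ X / R.size                    # dL/dW for L = mean(R**2)
    return np.mean(R ** 2), U.T @ dW.ravel(), np.outer(dW.ravel(), v)

def make_task():
    w = rng.normal(size=n)                       # circulant ground truth
    W_true = np.stack([np.roll(w, i) for i in range(m)])
    X = rng.normal(size=(12, n))
    return X[:4], X[:4] @ W_true.T, X[4:], X[4:] @ W_true.T

tasks = [make_task() for _ in range(8)]
U, v = 0.1 * rng.normal(size=(m * n, k)), 0.1 * rng.normal(size=k)
history = []
for _ in range(800):
    g_U_total, val_total = np.zeros_like(U), 0.0
    for X_tr, Y_tr, X_val, Y_val in tasks:
        _, g_v, _ = grads(U, v, X_tr, Y_tr)
        v_adapted = v - alpha * g_v              # inner loop (Eq. 8): v only
        val, _, g_U = grads(U, v_adapted, X_val, Y_val)
        g_U_total, val_total = g_U_total + g_U, val_total + val
    U = U - eta * g_U_total / len(tasks)         # outer loop (Eq. 9): U only
    history.append(val_total / len(tasks))
```

Only v changes per task while U is shared across tasks, mirroring the frozen symmetry matrices at meta-test time.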
5 Can we recover convolutional structure?
We now introduce a series of synthetic meta-learning problems, where each problem contains regression tasks that are guaranteed to have some symmetries, such as translation, rotation, or reflection. We combine meta-learning methods with general architectures not designed with these symmetries in mind, to see whether each method can automatically meta-learn these equivariances.
5.1 Learning (partial) translation symmetry
Our first batch of synthetic problems contains tasks with translational symmetry: we generate regression data by feeding random input vectors to a 1D locally connected (LC) layer with a fixed filter size to generate output vectors. Each task corresponds to the values of the LC filter, and the meta-learner must minimize mean squared error (MSE) after observing a single input-output pair. For each problem we constrain the LC filter weights with a low-rank factorization Elsayed et al. (2020), implementing a form of partial translation symmetry. In the extreme case where the rank is 1, the LC layer is equivalent to convolution (ignoring the biases) and thus generates exactly translation equivariant task data. We apply both MSR and MAML to this problem using a single layer fully connected model (MSR-FC and MAML-FC, respectively), so these models have no translational equivariance built in and must meta-learn it to solve the tasks efficiently. For comparison, we also train convolutional and locally connected models with MAML (MAML-Conv and MAML-LC, respectively). Since MAML-Conv’s architecture builds in translation equivariance, we expect it to at least perform well on the rank-1 problem. We train each method to convergence on the meta-training tasks of each problem, then evaluate the meta-test MSE. Appendix D.1 further explains the experimental setup.
Table 1 shows how each method performs on each of the synthetic problems, listed by column denoting the rank of the LC filter generating task data. On completely translation equivariant data (rank 1), MSR-FC performs comparably to MAML-Conv despite not having built in symmetry assumptions. MSR-FC actually meta-learns symmetry matrices that enforce convolutional sharing structure on the weights (Fig. 4), essentially “learning convolution” from translation-equivariant data. In Appendix C we inspect the meta-learned symmetry matrix U, which we find implements convolution using filter v just as Sec. 4.2 predicted. Meanwhile, MAML-FC and MAML-LC perform significantly worse, as they are unable to meta-learn this structure.
On partially symmetric data (higher ranks), MSR-FC outperforms all other methods due to its ability to flexibly meta-learn even partial symmetries. MAML-Conv performs worse in these settings since the convolution assumption is overly restrictive, while MAML-FC and MAML-LC are not able to meta-learn much structure at all.
5.2 Learning equivariance to rotations and flips
Table 2: Rotation/flip equivariance MSE.

Method             Rot    Rot+Flip
MSR-Conv (Ours)    .004   .001
MAML-Conv          .504   .507
We also created synthetic problems with 2D synthetic image inputs and outputs, in order to study rotation and flip equivariance. We generate task data by passing randomly generated inputs through a single layer E(2)-equivariant steerable CNN Weiler and Cesa (2019), configured to be equivariant to combinations of translations, discrete rotations by increments of 45°, and reflections. Hence our synthetic task data contains rotation and reflection symmetry in addition to translation symmetry. Each task corresponds to different values of the data-generating network’s weights. We apply MSR and MAML to a single standard convolution layer. By reparameterizing a convolution layer, we have already guaranteed translational equivariance, but each method must still meta-learn rotation and reflection (flip) equivariance from the data. Table 2 shows that MSR easily learns rotation and rotation+reflection equivariance on top of the convolutional model’s built in translational equivariance.
6 Can we learn invariances from augmented data?
Practitioners commonly use data augmentation to train their models to have certain invariances. Since invariance is a special case of equivariance, we can also view data augmentation as a way of learning equivariant models. The downside is that we need augmented data for each task. While augmentation is often possible during meta-training, there are many situations where it’s impractical at meta-test time. For example, in robotics we may meta-train a robot in simulation and then deploy (meta-test) in the real world, a kind of sim2real transfer strategy Song et al. (2020). During meta-training we can augment data using the simulated environment, but we cannot do the same at meta-test time in the real world.
Can we instead use MSR to learn equivariances from data augmentation at training time, and encode those learned equivariances into the network itself? This way, the network would preserve learned equivariances on new meta-test tasks without needing any additional data augmentation.
Alg. 2 describes our approach for meta-learning invariances from data augmentation, which wraps around any meta-learning algorithm using generic data augmentation procedures. Recall that each task T is split into training and validation data (D_T^tr, D_T^val). We use the data augmentation procedure to modify only the validation data, producing a new validation dataset D̃_T^val for each task. We reassemble each modified task using the original training data and modified validation data (D_T^tr, D̃_T^val). For each task, the meta-learner observes unaugmented training data, but must generalize to perform well on augmented validation data. This forces the model to be invariant to the augmentation transforms without actually seeing any augmented training data.
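The task-construction step of this strategy can be sketched as follows; the `augment` function is a hypothetical stand-in (random horizontal flips) for whatever augmentation procedure is available:

```python
import numpy as np

# Hedged sketch of the Alg. 2 task-construction step: the task's training
# data is left unaugmented, while its validation data is transformed, so the
# learner must generalize across the augmentation without ever seeing
# augmented training examples. `augment` is an illustrative stand-in.
rng = np.random.default_rng(0)

def augment(images):
    flip = rng.random(len(images)) < 0.5
    out = images.copy()
    out[flip] = out[flip, :, ::-1]       # flip selected images left-right
    return out

def build_aug_task(task):
    (X_tr, y_tr), (X_val, y_val) = task
    return (X_tr, y_tr), (augment(X_val), y_val)   # augment validation only

images = rng.normal(size=(10, 8, 8))     # ten toy 8x8 "images"
labels = rng.integers(0, 5, size=10)
task = ((images[:5], labels[:5]), (images[5:], labels[5:]))
train_split, val_split = build_aug_task(task)
assert np.array_equal(train_split[0], images[:5])   # training data untouched
```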
We apply this augmentation strategy to Omniglot Lake et al. (2015) and MiniImagenet Vinyals et al. (2016) few-shot classification to create the Aug-Omniglot and Aug-MiniImagenet benchmarks. Our data augmentation function contains a combination of random rotations, flips, and resizes (rescaling), which we apply only to each task’s validation data as described above. Aside from the augmentation procedure, the benchmarks are identical to prior work Finn et al. (2017): for each task, the model must classify images into one of either 5 or 20 classes (“way”) and receives either 1 or 5 examples of each class in the task training data (“shot”).
We tried combining Alg. 2 with our MSR method and three other meta-learning algorithms: MAML Finn et al. (2017), ANIL Raghu et al. (2019), and Prototypical Networks (ProtoNets) Snell et al. (2017). While the latter three methods all have the potential to learn equivariant features through Alg. 2, we hypothesize that since MSR enforces learned equivariance through its symmetry matrices, it should outperform those feature meta-learning methods. Appendix D.2 describes the experimental setup and method implementations in more detail.
Table 3 shows each method’s meta-test accuracies on both benchmarks. Across different settings MSR performs either comparably to the best method, or the best. MAML and ANIL perform similarly to each other, and usually worse than MSR, suggesting that learning equivariant or invariant features is not as helpful as learning equivariant layer structures. ProtoNets perform well on the easier Aug-Omniglot benchmark, but evidently struggle with learning a transformation invariant metric space on the harder Aug-MiniImagenet problems. Note that MSR’s reparameterization increases the number of meta-learned parameters at each layer, so MSR models contain more meta-learned parameters than corresponding MAML models. The “MAML (Big)” results show MAML performance with very large models containing more total parameters than the corresponding MSR models. The results show that MSR also outperforms these larger MAML models despite having fewer total parameters.
Table 3: Meta-test accuracies on Aug-Omniglot and Aug-MiniImagenet.

                  Aug-Omniglot                       Aug-MiniImagenet
                  5-way              20-way          5-way
Method            1-shot   5-shot    1-shot  5-shot  1-shot   5-shot
MAML
MAML (Big)
ANIL
ProtoNets
MSR (Ours)
7 Discussion and Future Work
We introduce a method for automatically meta-learning equivariances in neural network models, by encoding learned equivariance-inducing parameter sharing patterns in each layer. On new tasks, these sharing patterns reduce the number of task-specific parameters and improve generalization. Our experiments show that this method can improve few-shot generalization on task distributions with shared underlying symmetries. We also introduce a strategy for meta-training invariances into networks using data augmentation, and show that it works well with our method. By encoding equivariances into the network as a parameter sharing pattern, our method has the benefit of preserving learned equivariances on new tasks, so it can learn more efficiently.
Machine learning thus far has benefited from exploiting human knowledge of problem symmetries, and we believe this work presents a step towards learning and exploiting symmetries automatically. This work leads to numerous directions for future investigation. In addition to generalization benefits, standard convolution is practical since it exploits the parameter sharing structure to improve computational efficiency, relative to a fully connected layer of the same input/output dimensions. While MSR can improve computational efficiency by reparameterizing known structured layers (such as standard convolution), it does not exploit learned structure to further optimize its computation. Can we automatically learn or find efficient implementations of these more structured operations? Additionally, our method is best for learning symmetries which are shared across a distribution of tasks. Further research on quickly discovering symmetries particular to a single task would make deep learning methods significantly more useful on many difficult real world problems.
We would like to thank Sam Greydanus, Archit Sharma, and Yiding Jiang for reviewing and critiquing earlier drafts of this paper. This work was supported in part by Google. CF is a CIFAR Fellow.
References
 Alet et al. (2018) F. Alet, T. Lozano-Pérez, and L. P. Kaelbling. Modular meta-learning. arXiv preprint arXiv:1806.10166, 2018.
 Antoniou et al. (2018) A. Antoniou, H. Edwards, and A. Storkey. How to train your maml. arXiv preprint arXiv:1810.09502, 2018.
 Bengio et al. (1992) S. Bengio, Y. Bengio, J. Cloutier, and J. Gecsei. On the optimization of a synaptic learning rule. In Preprints Conf. Optimality in Artificial and Biological Neural Networks, volume 2. Univ. of Texas, 1992.

 Beymer and Poggio (1995) D. Beymer and T. Poggio. Face recognition from one example view. In Proceedings of IEEE International Conference on Computer Vision, pages 500–507. IEEE, 1995.
 Brock et al. (2017) A. Brock, T. Lim, J. M. Ritchie, and N. Weston. SMASH: one-shot model architecture search through hypernetworks. arXiv preprint arXiv:1708.05344, 2017.
 Cohen and Welling (2016) T. Cohen and M. Welling. Group equivariant convolutional networks. In International conference on machine learning, pages 2990–2999, 2016.
 Cohen et al. (2019) T. S. Cohen, M. Weiler, B. Kicanaoglu, and M. Welling. Gauge equivariant convolutional networks and the icosahedral cnn. arXiv preprint arXiv:1902.04615, 2019.
 Cranmer et al. (2020) M. Cranmer, S. Greydanus, S. Hoyer, P. Battaglia, D. Spergel, and S. Ho. Lagrangian neural networks. arXiv preprint arXiv:2003.04630, 2020.
 Cubuk et al. (2018) E. D. Cubuk, B. Zoph, D. Mané, V. Vasudevan, and Q. V. Le. AutoAugment: Learning augmentation policies from data. arXiv preprint arXiv:1805.09501, 2018.

Deleu et al. (2019)
T. Deleu, T. Würfl, M. Samiei, J. P. Cohen, and Y. Bengio.
Torchmeta: A MetaLearning library for PyTorch, 2019.
URL https://arxiv.org/abs/1909.06576. Available at: https://github.com/tristandeleu/pytorchmeta.  Diaconu and Worrall (2019) N. Diaconu and D. E. Worrall. Learning to convolve: A generalized weighttying approach. arXiv preprint arXiv:1905.04663, 2019.
 Dummit and Foote (2004) D. S. Dummit and R. M. Foote. Abstract algebra, volume 3. Wiley Hoboken, 2004.
 Elsayed et al. (2020) G. F. Elsayed, P. Ramachandran, J. Shlens, and S. Kornblith. Revisiting spatial invariance with low-rank local connectivity. arXiv preprint arXiv:2002.02959, 2020.
 Elsken et al. (2018) T. Elsken, J. H. Metzen, and F. Hutter. Neural architecture search: A survey. arXiv preprint arXiv:1808.05377, 2018.
 Finn et al. (2017) C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pages 1126–1135. JMLR.org, 2017.
 Flennerhag et al. (2019) S. Flennerhag, A. A. Rusu, R. Pascanu, H. Yin, and R. Hadsell. Meta-learning with warped gradient descent. arXiv preprint arXiv:1909.00025, 2019.
 Gens and Domingos (2014) R. Gens and P. M. Domingos. Deep symmetry networks. In Advances in neural information processing systems, pages 2537–2545, 2014.

 Graves and Jaitly (2014) A. Graves and N. Jaitly. Towards end-to-end speech recognition with recurrent neural networks. In International conference on machine learning, pages 1764–1772, 2014.
 Grefenstette et al. (2019) E. Grefenstette, B. Amos, D. Yarats, P. M. Htut, A. Molchanov, F. Meier, D. Kiela, K. Cho, and S. Chintala. Generalized inner loop meta-learning. arXiv preprint arXiv:1910.01727, 2019.
 Greydanus et al. (2019) S. Greydanus, M. Dzamba, and J. Yosinski. Hamiltonian neural networks. In Advances in Neural Information Processing Systems, pages 15353–15363, 2019.
 Ha et al. (2016) D. Ha, A. Dai, and Q. V. Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016.
 Hannun et al. (2014) A. Hannun, C. Case, J. Casper, B. Catanzaro, G. Diamos, E. Elsen, R. Prenger, S. Satheesh, S. Sengupta, A. Coates, et al. Deep speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567, 2014.
 Hochreiter et al. (2001) S. Hochreiter, A. S. Younger, and P. R. Conwell. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, pages 87–94. Springer, 2001.
 Kingma and Ba (2014) D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
 Kolda and Bader (2009) T. G. Kolda and B. W. Bader. Tensor decompositions and applications. SIAM review, 51(3):455–500, 2009.
 Kondor and Trivedi (2018) R. Kondor and S. Trivedi. On the generalization of equivariance and convolution in neural networks to the action of compact groups, 2018.
 Krizhevsky et al. (2012) A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
 Lake et al. (2015) B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum. Humanlevel concept learning through probabilistic program induction. Science, 350(6266):1332–1338, 2015.
 LeCun et al. (1998) Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
 Lee and Choi (2018) Y. Lee and S. Choi. Gradient-based meta-learning with learned layerwise metric and subspace. arXiv preprint arXiv:1801.05558, 2018.
 Li et al. (2017) Z. Li, F. Zhou, F. Chen, and H. Li. Meta-SGD: Learning to learn quickly for few-shot learning. arXiv preprint arXiv:1707.09835, 2017.
 Liu et al. (2018a) H. Liu, K. Simonyan, and Y. Yang. Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055, 2018a.
 Liu et al. (2018b) R. Liu, J. Lehman, P. Molino, F. P. Such, E. Frank, A. Sergeev, and J. Yosinski. An intriguing failing of convolutional neural networks and the coordconv solution. In Advances in Neural Information Processing Systems, pages 9605–9616, 2018b.
 Lorraine et al. (2019) J. Lorraine, P. Vicol, and D. Duvenaud. Optimizing millions of hyperparameters by implicit differentiation. arXiv preprint arXiv:1911.02590, 2019.
 Martens and Grosse (2015) J. Martens and R. Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. In International conference on machine learning, pages 2408–2417, 2015.
 Niyogi et al. (1998) P. Niyogi, F. Girosi, and T. Poggio. Incorporating prior information in machine learning by creating virtual examples. Proceedings of the IEEE, 86(11):2196–2209, 1998.
 Park and Oliva (2019) E. Park and J. B. Oliva. Meta-curvature. In Advances in Neural Information Processing Systems, pages 3309–3319, 2019.
 Raghu et al. (2019) A. Raghu, M. Raghu, S. Bengio, and O. Vinyals. Rapid learning or feature reuse? towards understanding the effectiveness of maml. arXiv preprint arXiv:1909.09157, 2019.
 Ravanbakhsh et al. (2017) S. Ravanbakhsh, J. Schneider, and B. Poczos. Equivariance through parameter-sharing. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pages 2892–2901. JMLR.org, 2017.
 Russakovsky et al. (2015) O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211–252, 2015.
 Rusu et al. (2018) A. A. Rusu, D. Rao, J. Sygnowski, O. Vinyals, R. Pascanu, S. Osindero, and R. Hadsell. Meta-learning with latent embedding optimization. arXiv preprint arXiv:1807.05960, 2018.
 Schmidhuber (1987) J. Schmidhuber. Evolutionary principles in self-referential learning. (On learning how to learn: The meta-meta-… hook.) Diploma thesis, Institut f. Informatik, Tech. Univ. Munich, 1:2, 1987.
 Shawe-Taylor (1989) J. Shawe-Taylor. Building symmetries into feedforward networks. In 1989 First IEE International Conference on Artificial Neural Networks (Conf. Publ. No. 313), pages 158–162. IET, 1989.
 Snell et al. (2017) J. Snell, K. Swersky, and R. Zemel. Prototypical networks for fewshot learning. In Advances in neural information processing systems, pages 4077–4087, 2017.
 Song et al. (2020) X. Song, Y. Yang, K. Choromanski, K. Caluwaerts, W. Gao, C. Finn, and J. Tan. Rapidly adaptable legged robots via evolutionary meta-learning. arXiv preprint arXiv:2003.01239, 2020.
 Stanley and Miikkulainen (2002) K. O. Stanley and R. Miikkulainen. Evolving neural networks through augmenting topologies. Evolutionary computation, 10(2):99–127, 2002.
 Stanley et al. (2009) K. O. Stanley, D. B. D’Ambrosio, and J. Gauci. A hypercube-based encoding for evolving large-scale neural networks. Artificial life, 15(2):185–212, 2009.
 Thrun and Pratt (2012) S. Thrun and L. Pratt. Learning to learn. Springer Science & Business Media, 2012.
 van der Wilk et al. (2018) M. van der Wilk, M. Bauer, S. John, and J. Hensman. Learning invariances using the marginal likelihood. In Advances in Neural Information Processing Systems, pages 9938–9948, 2018.
 Vinyals et al. (2016) O. Vinyals, C. Blundell, T. Lillicrap, D. Wierstra, et al. Matching networks for one shot learning. In Advances in neural information processing systems, pages 3630–3638, 2016.
 Weiler and Cesa (2019) M. Weiler and G. Cesa. General E(2)-equivariant steerable CNNs. In Advances in Neural Information Processing Systems, pages 14334–14345, 2019.
 Worrall and Welling (2019) D. Worrall and M. Welling. Deep scale-spaces: Equivariance over scale. In Advances in Neural Information Processing Systems, pages 7364–7376, 2019.

 Worrall et al. (2017) D. E. Worrall, S. J. Garbin, D. Turmukhambetov, and G. J. Brostow. Harmonic networks: Deep translation and rotation equivariance. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5028–5037, 2017.
 Zaheer et al. (2017) M. Zaheer, S. Kottur, S. Ravanbakhsh, B. Poczos, R. R. Salakhutdinov, and A. J. Smola. Deep sets. In Advances in neural information processing systems, pages 3391–3401, 2017.
 Zoph and Le (2016) B. Zoph and Q. V. Le. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578, 2016.
Appendix A Approximation and Tractability
a.1 Fully connected
From Eq. 6 we see that for a layer with $m$ output units, $n$ input units, and $k$ filter parameters, the symmetry matrix has $mnk$ entries. This is too expensive for larger layers, so in practice we need a factorized reparameterization to reduce memory and compute requirements when $mn$ is large.
For fully connected layers, we use a Kronecker factorization to scalably reparameterize each layer. First, we assume that the filter parameters can be arranged in a matrix $V$. Then we reparameterize each layer's weight matrix similar to Eq. 6, but assume the symmetry matrix is the Kronecker product of two smaller matrices $U_1$ and $U_2$:
(10)\quad W = \mathrm{reshape}\big((U_1 \otimes U_2)\,\mathrm{vec}(V)\big)
Since we only store the two Kronecker factors $U_1$ and $U_2$, we reduce the memory requirements of the symmetry matrix from $O(m^2 n^2)$ to $O(m^2 + n^2)$. In our experiments we generally choose $V$ to have the same shape as $W$, so $U_1 \in \mathbb{R}^{m \times m}$ and $U_2 \in \mathbb{R}^{n \times n}$. Then the actual memory cost of each reparameterized layer (including both $U_1, U_2$ and $V$) is $m^2 + n^2 + mn$, compared to $mn$ for a standard fully connected layer. So in the case where $m = n$, MSR increases memory cost by roughly a constant factor of 3.
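To make this bookkeeping concrete, the following is a small sketch of the parameter counts under the Kronecker factorization; the sizes used are illustrative placeholders, not values from our experiments.

```python
import numpy as np

# Illustrative sketch of the Kronecker factorization's memory savings.
# The sizes m, n are placeholders chosen for the example.
m, n = 8, 6                       # output / input units of the layer

# A full symmetry matrix acts on vec(V) in R^{mn}, so it has (mn)^2 entries.
full_entries = (m * n) ** 2

# The factored form stores only U1 (m x m) and U2 (n x n).
factored_entries = m * m + n * n

# The reparameterized layer also stores the filter matrix V (m x n).
layer_entries = factored_entries + m * n
standard_entries = m * n          # a standard fully connected layer

print(full_entries, factored_entries, layer_entries, standard_entries)
# The factored layer costs m^2 + n^2 + mn entries vs mn for a standard
# layer: a roughly constant-factor overhead (about 3x when m == n),
# compared to (mn)^2 entries for the unfactored symmetry matrix.
```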
After approximation, MSR also increases computation time (forward and backward passes) by roughly a constant factor of 3 compared to MAML. A standard fully connected layer requires a single matrix-matrix multiply $Y = WX$ in the forward pass (here $X$ and $Y$ are matrices, since inputs and outputs are batched). Applying the Kronecker-vec trick to Eq. 10 gives:
(11)\quad Y = WX = (U_2 V U_1^{\top})\,X
So rather than actually form the (possibly large) symmetry matrix $U_1 \otimes U_2$, we can directly construct $W$ using two additional matrix-matrix multiplies. Again assuming $U_1 \in \mathbb{R}^{m \times m}$ and $U_2 \in \mathbb{R}^{n \times n}$, each matrix in the preceding expression is approximately the same size as $W$.
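The Kronecker-vec trick can be checked numerically. The sketch below (with illustrative sizes) verifies that multiplying by the factored symmetry matrix equals two small matrix multiplies; note that with NumPy's row-major `ravel` the identity reads $W = U_1 V U_2^{\top}$, whereas a column-major vec convention swaps the roles of $U_1$ and $U_2$.

```python
import numpy as np

# Sketch verifying the Kronecker-vec trick: applying U1 (x) U2 to vec(V)
# equals two small matrix multiplies. Shapes are illustrative.
rng = np.random.default_rng(0)
m, n = 5, 4
U1 = rng.standard_normal((m, m))
U2 = rng.standard_normal((n, n))
V = rng.standard_normal((m, n))

# With numpy's row-major ravel: kron(U1, U2) @ V.ravel() == (U1 V U2^T).ravel()
W_via_kron = (np.kron(U1, U2) @ V.ravel()).reshape(m, n)
W_via_matmuls = U1 @ V @ U2.T      # never materializes the mn x mn matrix

assert np.allclose(W_via_kron, W_via_matmuls)

# The layer output is then just W @ X for a batch X, so the reparameterized
# forward pass costs two extra matrix-matrix multiplies.
X = rng.standard_normal((n, 7))    # batch of 7 inputs
Y = W_via_matmuls @ X
```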
a.2 2D Convolution
When reparameterizing 2D convolutions, we need to produce a filter (a rank-3 tensor $W$). We assume the filter parameters are stored in a rank-3 tensor $V$, and factorize the symmetry matrix into three separate matrices $U_1$, $U_2$, and $U_3$. A similar Kronecker product approximation gives:
(12)\quad U = U_1 \otimes U_2 \otimes U_3
(13)\quad W = V \times_1 U_1 \times_2 U_2 \times_3 U_3
where $\times_n$ denotes mode-$n$ tensor multiplication Kolda and Bader [2009]. Just as in the fully connected case, this convolution reparameterization is equivalent to a Kronecker factorization of the symmetry matrix.
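A minimal sketch of this three-factor reparameterization via mode-$n$ products, implemented with `einsum` and checked against the explicit Kronecker product; all shapes are illustrative placeholders.

```python
import numpy as np

# Sketch: the three-factor reparameterization as mode-n tensor products
# (Kolda & Bader 2009), checked against the Kronecker-factored symmetry
# matrix U1 (x) U2 (x) U3 acting on the flattened filter tensor.
rng = np.random.default_rng(0)
d1, d2, d3 = 3, 4, 5
U1 = rng.standard_normal((d1, d1))
U2 = rng.standard_normal((d2, d2))
U3 = rng.standard_normal((d3, d3))
V = rng.standard_normal((d1, d2, d3))   # filter parameters (rank-3 tensor)

# W = V x_1 U1 x_2 U2 x_3 U3, i.e. multiply each mode by its factor matrix.
W = np.einsum('ia,jb,kc,abc->ijk', U1, U2, U3, V)

# Equivalent to multiplying vec(V) by the full Kronecker product
# (row-major flattening matches kron(kron(U1, U2), U3)).
U = np.kron(np.kron(U1, U2), U3)
assert np.allclose(W.ravel(), U @ V.ravel())
```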
An analysis of the memory and computation requirements of reparameterized convolution layers proceeds similarly to the above analysis for the fully connected case. As we describe below, in our augmented experiments using convolutional models, each MSR outer step takes roughly a constant factor longer than a MAML outer step.
Appendix B Proof of Proposition 1
We’ll model an input signal as a function $f$ on some underlying space $X$. We then consider a finite group $G$ of symmetries acting transitively on $X$, over which we desire equivariance. Many (but not all) of the groups discussed in Weiler and Cesa [2019] are finite groups of this form.
It is proven by Kondor and Trivedi [2018] that a function is equivariant to the action of $G$ if and only if it is a group convolution. Following the original paper on group-equivariant CNNs Cohen and Welling [2016], we in fact consider a slight simplification of this notion: a finite “$G$ cross-correlation” of $f$ with a filter $\psi$. This can be defined as:
(14)\quad (\psi \star f)(g) = \sum_{x \in X} \psi(g^{-1}x)\, f(x)
In order for a fully connected layer’s weight matrix $W$ to act on the function $f$, we must first assume that $f$ has finite support $\hat{X} = \{x_1, \ldots, x_k\} \subseteq X$, i.e. $f$ is only nonzero at these points within $X$. This means that $f$ can be represented as a “dual” vector $\bar{f} \in \mathbb{R}^{k}$ given by $\bar{f}_i = f(x_i)$, on which $W$ can act.² ²This is using the natural linear algebraic dual of the free vector space on $\hat{X}$.
We aim to show that a certain value of the symmetry matrix $U$ allows arbitrary cross-correlations (and only cross-correlations) to be represented by fully connected layers with weight matrices of the form
(15)\quad W = \mathrm{reshape}(Uv)
where $v$ is an arbitrary vector of appropriate dimension. The reshape specifically gives $W \in \mathbb{R}^{n \times k}$ (one row per group element), which transforms the vector $\bar{f}$.
With this in mind, we first use that the action of the group can be represented as a matrix transformation on this vector space, using a matrix representation $\rho$:
(16)\quad \overline{g \cdot f} = \rho(g)\,\bar{f}
where notably each $\rho(g)$ is a permutation matrix, since $G$ acts on the points of $X$ by permuting them.
We consider a vector $v \in \mathbb{R}^{k}$. Since $v$ has the same dimension as $\bar{f}$, we can also treat $v$ as the “dual” vector of a function $\psi$ with support $\hat{X}$, described by $\bar{\psi} = v$. We can interpret $\psi$ as a convolutional filter, just like in Eq. 14. $\rho(g)$ then acts on $\bar{\psi}$ just as it acts on $\bar{f}$, namely:
(17)\quad \overline{g \cdot \psi} = \rho(g)\,\bar{\psi}
Now, we define $U$ by stacking the matrix representations of the group elements $g_1, \ldots, g_n$:
(18)\quad U = \begin{bmatrix} \rho(g_1) \\ \rho(g_2) \\ \vdots \\ \rho(g_n) \end{bmatrix}
which implies the following value of $W$:
(19)\quad W = \mathrm{reshape}(U\bar{\psi}) = \begin{bmatrix} (\rho(g_1)\bar{\psi})^{\top} \\ \vdots \\ (\rho(g_n)\bar{\psi})^{\top} \end{bmatrix}
This then grants that the output of the fully connected layer with weights $W$ is:
(20)\quad W\bar{f} = \begin{bmatrix} (\rho(g_1)\bar{\psi})^{\top}\bar{f} \\ \vdots \\ (\rho(g_n)\bar{\psi})^{\top}\bar{f} \end{bmatrix}
Using that $f$ has finite support $\hat{X}$, and that $\bar{\psi} = v$, we have that:
(21)\quad (\rho(g_i)\bar{\psi})^{\top}\bar{f} = \sum_{x \in \hat{X}} \psi(g_i^{-1}x)\, f(x)
Lastly, we can interpret $W\bar{f}$ as a function $h$ mapping each group element $g_i$ to its $i$th component:
(22)\quad h(g_i) = (W\bar{f})_i = \sum_{x \in \hat{X}} \psi(g_i^{-1}x)\, f(x) = (\psi \star f)(g_i)
which is precisely the cross-correlation as described in Eq. 14 with filter $\psi$. This implies that $W\bar{f}$ must be equivariant with respect to $G$. Moreover, all such equivariant functions are cross-correlations parameterized by some filter $\psi$, so with $U$ fixed as in Eq. 18, we have that $W = \mathrm{reshape}(Uv)$ can represent all equivariant functions.
This means that if $v$ is chosen to have the same dimension as the input, and the symmetry matrix $U$ is sufficiently large, any equivariance to a finite group can be meta-learned. Moreover, in this case the symmetry matrix has a very natural and interpretable structure, containing a representation of the group in block submatrices. Lastly, notice that $v$ corresponds (dually) to the convolutional filter, justifying the notion that we learn the convolutional filter in the inner loop, and the group action in the outer loop.
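The construction in this proof can be sketched concretely for the cyclic translation group acting on length-$n$ signals, where each $\rho(g)$ is a shift matrix, stacking them gives $U$, and $W\bar{f}$ reproduces circular cross-correlation. All names and sizes below are illustrative.

```python
import numpy as np

# Sketch of the proof's construction for the cyclic translation group Z_n
# acting on signals of length n. Each rho(g) is a permutation (shift)
# matrix; stacking them gives U, and W = reshape(U @ psi_bar) computes the
# circular cross-correlation.
n = 6
rng = np.random.default_rng(0)
f_bar = rng.standard_normal(n)     # dual vector of the input signal f
psi_bar = rng.standard_normal(n)   # dual vector of the filter psi (the "v")

def rho(g):
    # Permutation matrix for translation by g: (g . f)(x) = f(x - g).
    P = np.zeros((n, n))
    for x in range(n):
        P[x, (x - g) % n] = 1.0
    return P

# U stacks the representations of all group elements, as in Eq. 18.
U = np.concatenate([rho(g) for g in range(n)], axis=0)   # shape (n*n, n)
W = (U @ psi_bar).reshape(n, n)    # row i is (rho(g_i) psi_bar)^T

# Compare against the cross-correlation sum_x psi(g^{-1} x) f(x).
direct = np.array([sum(psi_bar[(x - g) % n] * f_bar[x] for x in range(n))
                   for g in range(n)])
assert np.allclose(W @ f_bar, direct)
```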
In the above proof, we’ve used the original definition of group convolution Cohen and Welling [2016] for the sake of simplicity. It is useful to note that a slight generalization of the proof applies for more general equivariance between representations, as defined in Eq. 3 (i.e. the case when the group acts by an arbitrary linear transformation, not necessarily a permutation). This is subject to a unitarity condition on the group representation Worrall and Welling [2019].
Without any modification to the method, arbitrary linear approximations to group convolution can be learnt when the group is not a subgroup of the symmetric group, i.e. when $G$ does not consist purely of permutations of indices. For example, non-axis-aligned rotations can easily be approximated through both bilinear and bicubic interpolation, whereby the value of a pixel after rotation is a linear interpolation of the 4 or 16 pixels nearest to the “true” position of that pixel before rotation. This allows us to practically learn groups like the rotation group generated by 45 degree rotations.
Appendix C Visualizing the meta-learned symmetry matrix
Fig. 6 visualizes the actual symmetry matrix that MSR-FC meta-learns from translation equivariant data. Each column is one of the submatrices corresponding to the action of a discrete translation group element on the filter. In other words, MSR automatically meta-learned $U$ to contain these submatrices such that each one translates the filter by a different number of spaces, effectively meta-learning standard convolution! In the actual symmetry matrix the submatrices are stacked on top of each other as in Eq. 18, but we display them side by side for easy visualization. The figure is also cropped for space: we show only the first few of the submatrices, and each submatrix is cropped.
Appendix D Experimental details
Throughout this work we implemented all gradient-based meta-learning algorithms in PyTorch, using the Higher library Grefenstette et al. [2019].
d.1 Synthetic Problems
In the synthetic problems we generated regression data using either a single locally connected layer (Sec. 5.1) or a single E(2)-steerable layer (Sec. 5.2). Each task corresponds to different weights of the data generating network, whose entries we sample independently from a standard normal distribution. For rank-constrained locally connected filters we sampled base filters and then set the filter value at each spatial location to be a random linear combination of those filters. We generated tasks for each synthetic problem and randomly split them into meta-train and meta-test tasks.
For each particular task, we generated 20 data points by randomly sampling the input vector or “image” entries from a standard normal distribution, passing the input vector into the data generating network, and saving the input and output as a pair. We then randomly split the task data into task training data (1 data point) and task validation data (19 data points). Hence the model has to solve each task after viewing a single task training data point.
During meta-training we trained each method for steps, which was sufficient for the training loss to converge for every method in every problem. We used the Adam Kingma and Ba [2014] optimizer in the outer loop with learning rate . In the inner loop we used SGD with meta-learned per-layer learning rates, initialized to . We used a single inner loop step for all experiments, and a task batch size of during meta-training. At meta-test time we evaluated average performance and error bars using random held-out tasks.
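As a rough illustration of the inner loop described above, the following sketch takes one SGD step on a single task training point of a synthetic linear regression task and then evaluates on held-out task validation data. A single linear layer stands in for the full model, and all names, sizes, and learning rates are placeholders.

```python
import numpy as np

# Minimal sketch of the inner loop: one SGD step on the task training data
# with a (meta-learned) per-layer learning rate, then evaluation on task
# validation data. All values are illustrative placeholders.
rng = np.random.default_rng(0)
d = 5
W_true = rng.standard_normal((d, d))     # task's data-generating weights
x_train = rng.standard_normal((d, 1))    # 1 task training point
y_train = W_true @ x_train
x_val = rng.standard_normal((d, 19))     # 19 task validation points
y_val = W_true @ x_val

W = np.zeros((d, d))    # stand-in for the meta-learned initialization
inner_lr = 0.1          # per-layer learning rate, meta-learned in MSR

# One inner-loop SGD step on the squared-error task training loss.
grad = 2 * (W @ x_train - y_train) @ x_train.T
W = W - inner_lr * grad

val_loss = float(np.mean((W @ x_val - y_val) ** 2))
# In meta-training, val_loss would be differentiated through the inner step
# to update the initialization, learning rates, and symmetry matrices.
```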
We ran all experiments on a single machine with a single NVidia RTX 2080Ti GPU. Our MSR-FC experiments took about (outer loop) steps per second, while our MSR-Conv experiments took about (outer loop) steps per second.
d.2 Augmentation Experiments
To create AugOmniglot and AugMiniImagenet, we extended the Omniglot and MiniImagenet benchmarks from TorchMeta Deleu et al. [2019]. Each task in these benchmarks is split into support (train) and query (validation) datasets. For the augmented benchmarks we applied data augmentation to only the query dataset of each task, which consisted of randomly resized crops, reflections, and rotations. Using the torchvision library, the augmentation function is:
For all experiments except MiniImagenet 5-shot, MAML and MSR used exactly the same convolutional architecture (same number of layers, number of channels, filter sizes, etc.) as prior work on Omniglot and MiniImagenet Vinyals et al. [2016]. For MSR we reparameterize each layer’s weight matrix or convolutional filter. For MiniImagenet 5-shot, we found that increasing architecture size helped both methods: for the first 3 convolution layers, we increased the number of output channels to and increased the kernel size to . We then inserted a convolution layer with output channels right before the linear output layer. For fair comparison we also increased the ProtoNet architecture size on MiniImagenet 5-shot, using output channels at each layer. We found that increasing the kernel size to at each layer in the ProtoNet worsened performance, so we left it at .
For “MAML (Big)” experiments we increased the architecture size of the MAML model to exceed the number of meta-parameters (symmetry matrices + filter parameters) in the corresponding MSR model. For MiniImagenet 5-shot we increased the number of output channels at each of the convolution layers to , then inserted an additional linear layer with output units before the final linear layer. For MiniImagenet 1-shot we increased the number of output channels at each of the convolution layers to , then inserted an additional linear layer with output units before the final linear layer. For the Omniglot experiments we increased the number of output channels at each of the convolution layers to .
For all experiments and gradient-based methods we trained for (outer) steps using the Adam optimizer with learning rate for MiniImagenet 5-shot and for all other experiments. In the inner loop we used SGD with meta-learned per-layer learning rates initialized to for Omniglot and for MiniImagenet. We meta-trained using a single inner loop step in all experiments, and used inner loop steps at meta-test time. Although MAML originally meta-trained with inner loop steps on MiniImagenet, we found that this destabilized meta-training on our augmented version. We hypothesize that this is due to the discrepancy between support and query data in our augmented problems. During meta-training we used a task batch size of for Omniglot and for MiniImagenet. At meta-test time we evaluated average performance and error bars using held-out meta-test tasks.
We ran all experiments on a machine with a single NVidia Titan RTX GPU. For AugOmniglot, we ran two experiments simultaneously on the same machine, which likely slowed each individual experiment down. Our MSR method took about steps per second, whereas the MAML baseline took about steps per second. For AugMiniImagenet we ran one experiment per machine. MSR took steps per second, while MAML took steps per second on these experiments.