Meta-Learning Symmetries by Reparameterization

07/06/2020 ∙ by Allan Zhou, et al. ∙ Stanford University

Many successful deep learning architectures are equivariant to certain transformations in order to conserve parameters and improve generalization: most famously, convolution layers are equivariant to shifts of the input. This approach only works when practitioners know the symmetries of the task a priori and can manually construct an architecture with the corresponding equivariances. Our goal is a general approach for learning equivariances from data, without needing prior knowledge of a task's symmetries or custom task-specific architectures. We present a method for learning and encoding equivariances into networks by learning corresponding parameter sharing patterns from data. Our method can provably encode equivariance-inducing parameter sharing for any finite group of symmetry transformations, and we find experimentally that it can automatically learn a variety of equivariances from symmetries in data. We provide our experiment code and pre-trained models online.




1 Introduction

In deep learning, the convolutional neural network (CNN) LeCun et al. (1998) is a prime example of exploiting equivariance to a symmetry transformation to conserve parameters and improve generalization. In image classification Russakovsky et al. (2015); Krizhevsky et al. (2012) and audio processing Graves and Jaitly (2014); Hannun et al. (2014) tasks, we may expect the layers of a deep network to learn feature detectors that are translation equivariant: if we translate the input, the output feature map is also translated. Convolution layers satisfy translation equivariance by definition, and produce remarkable results on these tasks. The success of convolution’s “built in” inductive bias suggests that we can similarly exploit other equivariances to solve machine learning problems.

However, there are substantial challenges with building in inductive biases. Identifying the correct biases to build in is challenging, and even if we do know the correct biases, it is often difficult to build them into a neural network. Practitioners commonly avoid this issue by “training in” desired equivariances (usually the special case of invariances) using data augmentation. However, data augmentation can be challenging in many problem settings and we would prefer to build the equivariance into the network itself. Additionally, building in incorrect biases may actually be detrimental to final performance Liu et al. (2018b).

In this work we aim for an approach that can automatically learn and encode equivariances into a neural network. This would free practitioners from having to design custom equivariant architectures for each task, and allow them to transfer any learned equivariances to new tasks. Neural network layers can achieve various equivariances through parameter sharing patterns, such as the spatial parameter sharing of standard convolutions. The particular sharing pattern depends on the equivariance, and in this paper we reparameterize network layers to learnably represent sharing patterns. We leverage meta-learning to learn the sharing patterns that help a model to generalize on new tasks.

The primary contribution of this paper is a general approach to automatically learn and build in equivariances to symmetries observed in data, without requiring custom designed equivariant architectures and using only generic neural network components. We show theoretically that this method can produce networks equivariant to any finite symmetry group. Our experiments show that our method can not only recover various convolutional architectures from data, but also learns invariances to a variety of transformations usually obtained via data augmentation.

2 Related Work

A number of works have studied designing layers with equivariances to certain transformations such as permutation, rotation, reflection, and scaling Gens and Domingos (2014); Cohen and Welling (2016); Zaheer et al. (2017); Worrall et al. (2017); Cohen et al. (2019); Weiler and Cesa (2019); Worrall and Welling (2019). These approaches focus on manually constructing layers analogous to standard convolution, but equivariant to other symmetry groups besides translation. More theoretical work has characterized the nature of equivariant layers for various symmetry groups Kondor and Trivedi (2018); Shawe-Taylor (1989); Ravanbakhsh et al. (2017). Rather than building symmetries into the architecture, data augmentation Beymer and Poggio (1995); Niyogi et al. (1998) trains a network to satisfy them. There is also a hybrid approach that pre-trains a basis of rotated filters in order to define roto-translation equivariant convolution Diaconu and Worrall (2019). Unlike these works, we aim to automatically build in symmetries by acquiring them from data.

Prior work on automatically learning symmetries is more sparse, and includes works that focus on Gaussian processes van der Wilk et al. (2018) and symmetries of physical systems Greydanus et al. (2019); Cranmer et al. (2020). Automatically discovering data augmentation strategies Cubuk et al. (2018); Lorraine et al. (2019) can also be considered a way of learning symmetries, but does not embed these symmetries into the model itself.

Our work is related to neural network architecture search Zoph and Le (2016); Brock et al. (2017); Liu et al. (2018a); Elsken et al. (2018), which also aims to automate part of the model design process. Although architecture search methods are varied, they are generally not designed to exploit symmetry or learn equivariances. Evolutionary methods for learning both network weights and topology Stanley and Miikkulainen (2002); Stanley et al. (2009) are also not motivated by symmetry considerations.

Our method learns to exploit symmetries that are shared by a collection of tasks, a form of meta-learning Thrun and Pratt (2012); Schmidhuber (1987); Bengio et al. (1992); Hochreiter et al. (2001). We extend gradient based meta-learning Finn et al. (2017); Li et al. (2017); Antoniou et al. (2018) to separately learn parameter sharing patterns (which enforce equivariance) and actual parameter values. Separately representing network weights in terms of a sharing pattern and parameter values is a form of reparameterization. Prior work has used weight reparameterization in order to “warp” the loss surface Lee and Choi (2018); Flennerhag et al. (2019) and to learn good latent spaces Rusu et al. (2018) for optimization, rather than to encode equivariance. HyperNetworks Ha et al. (2016) generate network layer weights using a separate smaller network, which can be viewed as a nonlinear reparameterization, albeit not one that encourages learning equivariances. Modular meta-learning Alet et al. (2018) is a related technique that aims to achieve combinatorial generalization on new tasks by stacking meta-learned “modules,” each of which is a neural network. This can be seen as parameter sharing by re-using and combining these modules, rather than by layerwise reparameterization as in our work.

3 Preliminaries

In Sec. 3.1, we review gradient based meta-learning, which underlies our algorithm. Sections 3.2 and 3.3 build up a formal definition of equivariance and group convolution Cohen and Welling (2016), a generalization of standard convolution which defines equivariant operations for other groups such as rotation and reflection. These concepts are important for a theoretical understanding of our work as a method for learning group convolutions in Sec. 4.2.

3.1 Gradient Based Meta-Learning

Our method is a gradient-based meta-learning algorithm that extends MAML Finn et al. (2017), which we briefly review here. Suppose we have some task distribution p(T), where each task T_i is split into training and validation datasets (D_i^tr, D_i^val). For a model with parameters θ, loss L, and learning rate α, the “inner loop” updates θ using the task’s training data, shown here with one gradient descent step:

    θ_i′ = θ − α ∇_θ L(θ, D_i^tr)    (1)

In the “outer loop,” MAML meta-learns a good initialization θ using the loss of the new parameters θ_i′ on the task’s validation data:

    θ ← θ − η ∇_θ Σ_i L(θ_i′, D_i^val)    (2)
Although MAML focuses on meta-learning the inner loop initialization θ, one can extend this idea to meta-learning other things such as the inner learning rate α. In our method, we meta-learn a parameter sharing pattern at each layer that maximizes performance across the task distribution.
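As a toy illustration of the two update equations above (not the paper's implementation), one inner step and one outer step for a scalar linear model ŷ = θx can be sketched as follows; the task tuples and step sizes are invented for the example:

```python
import numpy as np

def inner_update(theta, x, y, alpha):
    """One inner-loop gradient step on a task's training example.

    Model: scalar linear regression y_hat = theta * x, squared loss.
    """
    grad = 2 * (theta * x - y) * x          # d/dtheta of (theta*x - y)^2
    return theta - alpha * grad

def maml_meta_step(theta, tasks, alpha, eta):
    """One outer-loop step. Each task is ((x_tr, y_tr), (x_val, y_val)).

    For this linear model the outer gradient through the inner step can
    be written in closed form via the chain rule.
    """
    outer_grad = 0.0
    for (x_tr, y_tr), (x_val, y_val) in tasks:
        theta_i = inner_update(theta, x_tr, y_tr, alpha)
        # d(theta_i)/d(theta) for the linear model:
        dtheta_i = 1 - 2 * alpha * x_tr ** 2
        outer_grad += 2 * (theta_i * x_val - y_val) * x_val * dtheta_i
    return theta - eta * outer_grad / len(tasks)
```

Running many such meta-steps on tasks drawn from y = 2x drives the initialization toward the value that adapts best in one inner step.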

3.2 Groups and Group Actions

Symmetry and equivariance are usually studied in the context of groups and their actions on sets; refer to Dummit and Foote (2004) for more comprehensive coverage. A group is a set closed under some associative binary operation, where there is an identity element and each element has an inverse. For example, consider the group (ℝⁿ, +): we can add any two vectors of ℝⁿ to obtain another, each vector has an additive inverse, and the zero vector is the identity.

A group G can act on a set X through an action ρ: G → Sym(X), which maps each g ∈ G to some transformation on X. ρ must be a homomorphism, i.e. ρ(gh) = ρ(g)ρ(h) for all g, h ∈ G, and Sym(X) is the set of automorphisms on X (bijective homomorphisms from X to itself). As shorthand we often write gx := ρ(g)(x) for any x ∈ X. Any group can act on itself by letting X = G: for g ∈ G, we define the action ρ(g)(h) = gh for any h ∈ G.

To define group convolution it is useful to consider a layer’s inputs and outputs as functions on some underlying space X. For example, a vector in ℝⁿ is a function mapping indices {1, …, n} to real numbers. We denote the vector space of all real-valued functions on X by F(X). For a group G, we can define how G acts on F(X) by a representation π which maps each g ∈ G to an invertible linear transformation π(g) on F(X). If G already acts on X, the “natural” representation of the action is [π(g)f](x) = f(g⁻¹x). To provide a concrete example, consider images as functions f mapping pixel locations to intensities, and let the translation group (ℝ², +) act on X = ℝ² as defined above. Then [π(t)f](x) = f(x − t) for any translation t. Hence this representation of G on images translates images by translating their underlying space (the 2-D plane).
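For intuition, the natural representation acting by translation on a discretized, periodic 1-D signal is just a cyclic shift; a minimal numpy sketch (our own example, not from the paper):

```python
import numpy as np

def act(g, x):
    """Natural representation of the cyclic group Z_n on f: Z_n -> R.

    [pi(g) f](i) = f(i - g), i.e. a cyclic shift of the vector by g.
    """
    return np.roll(x, g)

x = np.array([1.0, 2.0, 3.0, 4.0])
# Homomorphism property: acting by 2 then by 1 equals acting by 3 (mod n).
assert np.allclose(act(1, act(2, x)), act(3, x))
```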

3.3 Equivariance and Convolution

A function (like a neural network layer) is equivariant to some transformation if transforming the function’s input is the same as transforming its output. To be more precise, we must define what those transformations of the input and output are. Consider a neural network layer as a function on functions φ: F(X) → F(Y) for two underlying spaces X, Y. Assume we have some group G and two representations π₁, π₂, where π₁ defines how G transforms the input, while π₂ defines how G transforms the output. We define G-equivariance with respect to these transformations:

    φ(π₁(g)f) = π₂(g)φ(f),  for all g ∈ G, f ∈ F(X)    (3)

If we choose π₂(g) = id we get φ(π₁(g)f) = φ(f), showing that invariance is a type of equivariance.
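A one-line numerical illustration of invariance as equivariance with the trivial output representation (our own toy example): sum pooling is unchanged by cyclic shifts of its input.

```python
import numpy as np

def pool(x):
    """Sum pooling: equivariant with the trivial output representation,
    i.e. invariant -- shifting the input leaves the output unchanged."""
    return float(x.sum())

x = np.array([1.0, 2.0, 3.0])
assert pool(np.roll(x, 1)) == pool(x)
```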

Figure 1: Convolution as translating filters. Left: Standard 1-D convolution slides a filter along the length of the input. This operation is translation equivariant: translating the input will translate the output. Right: Standard convolution is equivalent to a fully connected layer with a parameter sharing pattern: each row of the weight matrix contains a translated copy of the filter. Other equivariant layers will have their own sharing patterns.

Deep networks contain many layers, but fortunately function composition preserves equivariance. So if we achieve equivariance in each individual layer, the whole network will be equivariant. Pointwise nonlinearities such as ReLU and sigmoid are already equivariant to any permutation of the input and output indices, which includes translation, reflection, and rotation. Hence we are primarily focused on enforcing equivariance in the linear layers.
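The claim that pointwise nonlinearities commute with index permutations is easy to check numerically (our own sanity check, not from the paper):

```python
import numpy as np

relu = lambda x: np.maximum(x, 0.0)   # applied elementwise

x = np.array([-1.0, 2.0, -3.0, 4.0])
perm = np.array([2, 0, 3, 1])         # an arbitrary permutation of indices
# Permuting then applying ReLU equals applying ReLU then permuting.
assert np.allclose(relu(x[perm]), relu(x)[perm])
```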

Prior work Kondor and Trivedi (2018) has shown that a linear layer is equivariant to the action of some group G if and only if it is a group convolution, which generalizes standard convolution to arbitrary groups. For a specific G, we call the corresponding group convolution “G-convolution” to distinguish it from standard convolution. Intuitively, G-convolution transforms a filter ψ according to each g ∈ G, then computes a dot product between the transformed filter and the input. In standard convolution, the filter transformations correspond to translation (Fig. 1). More formally, assume X is a finite set. G-equivariant layers convolve an input f: X → ℝ with a filter ψ: X → ℝ:

    (f ⋆ ψ)(g) = Σ_{x∈X} f(x) ψ(g⁻¹x)    (4)
In this work, we present a method that represents and learns parameter sharing patterns for existing layers, such as fully connected layers. These sharing patterns can force the layer to implement various group convolutions, and hence equivariant layers.
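To make the G-convolution of Eq. 4 concrete, here is a sketch for the cyclic group Z_n acting on itself, where the definition reduces to circular cross-correlation; the equivariance check mirrors the definition above (illustrative code, not the paper's):

```python
import numpy as np

def g_conv(f, psi):
    """G-convolution over the cyclic group Z_n (acting on itself).

    (f * psi)(g) = sum_x f(x) psi(g^{-1} x), with g^{-1} x = (x - g) mod n.
    For Z_n this is exactly circular cross-correlation.
    """
    n = len(f)
    return np.array([sum(f[x] * psi[(x - g) % n] for x in range(n))
                     for g in range(n)])

f = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
psi = np.array([1.0, 0.0, -1.0, 0.0, 0.0])
# Equivariance: shifting the input shifts the output by the same amount.
assert np.allclose(g_conv(np.roll(f, 2), psi), np.roll(g_conv(f, psi), 2))
```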

4 Encoding and Learning Equivariance

In order to learn equivariances automatically, our method introduces a flexible representation that can encode possible equivariances, and an algorithm for learning which equivariances to encode. Here we describe this method, which we call Meta-learning Symmetries by Reparameterization (MSR).

4.1 Learnable Parameter Sharing

As Fig. 1 shows, a fully connected layer can implement standard convolution if its weight matrix is constrained with a particular sharing pattern, where each row contains a translated copy of the same underlying filter parameters. This idea generalizes to equivariant layers for other transformations like rotation and reflection, but the sharing pattern depends on the transformation. Since we do not know the sharing pattern a priori, we “reparameterize” fully connected weight matrices to represent them in a general and flexible fashion. A fully connected layer with input x ∈ ℝⁿ and output y ∈ ℝᵐ has weight matrix W ∈ ℝ^{m×n}:

    y = Wx    (5)

We can optionally incorporate biases by appending a dimension with value “1” to the input x. We factorize vec(W) as the product of a “symmetry matrix” U ∈ ℝ^{mn×k} and a vector of “filter parameters” v ∈ ℝᵏ:

    vec(W) = Uv    (6)

For fully connected layers, we reshape the vector Uv ∈ ℝ^{mn} into a weight matrix W ∈ ℝ^{m×n}. Intuitively, U encodes the pattern by which the weights W “share” the filter parameters v. Crucially, we can now separate the problem of learning the sharing pattern (learning U) from the problem of learning the filter parameters v. In Sec. 4.3, we discuss how to learn U from data.
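A minimal sketch of the reparameterized layer of Eqs. 5-6 (the function name and shapes are ours, not from the released code):

```python
import numpy as np

def reparam_linear(U, v, x, m, n):
    """Forward pass of a reparameterized fully connected layer.

    U: (m*n, k) symmetry matrix, v: (k,) filter parameters.
    The weight matrix is W = reshape(U @ v, (m, n)); U fixes the sharing
    pattern, while v carries the (task-specific) values.
    """
    W = (U @ v).reshape(m, n)
    return W @ x
```

For instance, a U whose rows tie W[0,0] = W[1,1] = v[0] and W[0,1] = W[1,0] = v[1] yields a 2x2 circulant weight matrix for every v.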

The symmetry matrix for each layer has mn × k entries, which can become too expensive in larger layers. Kronecker factorization is a common approach for approximating a very large matrix with smaller ones Martens and Grosse (2015); Park and Oliva (2019). In Appendix A we describe how we apply Kronecker approximation to Eq. 6, and analyze memory and computation efficiency.

In practice, there are certain equivariances that would be expensive to meta-learn, but that we know to be useful: for example, standard 2D convolutions for image data. However, there may be still other symmetries of the data (i.e., rotation, scaling, reflection, etc.) that we still wish to learn automatically. This suggests a “hybrid” approach, where we bake-in certain equivariances we know to be useful, and learn the others. Indeed, we can directly reparameterize a standard convolution layer by reshaping into a convolution filter bank rather than a weight matrix. By doing so we bake in translational equivariance, but we can still learn things like rotation equivariance from data.

4.2 Parameter sharing and group convolution

Here we show that by properly choosing the symmetry matrix U of Eq. 6, we can force the layer to implement arbitrary group convolutions (Eq. 4) with filter v. Recall that group convolutions generalize standard convolution to define operations that are equivariant to other groups, such as rotation and permutation. Hence by choosing U properly we can enforce arbitrary equivariances, which will be preserved regardless of the value of v.

For simplicity, we’ll work with an input of the form f: X → ℝ, although the result easily generalizes to multi-channel inputs. We assume that f has finite support on X and can therefore be represented as a vector f ∈ ℝⁿ, where n = |X|. In practice, this is always the case: for example, a discretized image is only nonzero at a finite number of pixel locations. Then we can formalize our claim:

Proposition 1

Suppose G is a finite group. There exists a fixed symmetry matrix U such that for any filter parameters v, the layer with weights vec(W) = Uv implements G-convolution of v with input f. Moreover, with this fixed choice of U, any G-convolution can be represented by a weight matrix W = reshape(Uv) for some v.

We present a proof in Appendix B. Intuitively, U can store the symmetry transformations for each g ∈ G, thus capturing how the filters should transform during G-convolution. For example, Fig. 2 shows how to choose U to implement convolution on a permutation group.

Subject to having a correct U, v is precisely the convolution filter in a G-convolution. This motivates the notion of separately learning the convolution filter and the symmetry structure in the inner and outer loops of a meta-learning process, respectively.

Figure 2: We reparameterize the weights of each layer in terms of a symmetry matrix U, which can enforce equivariant sharing patterns of the filter parameters v. Here we show a U that enforces permutation equivariance. More technically, the layer implements group convolution on a permutation group: U’s block submatrices define the action of each permutation on the filter v. Note that U need not be binary in general.
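As an illustration of Proposition 1 in the simplest case, the following sketch constructs a U for the cyclic translation group, so that reshape(Uv) is a circulant weight matrix implementing circular convolution for any v (our own toy construction, not the paper's code):

```python
import numpy as np

def symmetry_matrix_cyclic(n):
    """Symmetry matrix U for the cyclic group Z_n.

    Row g*n + x of U is one-hot at index (x - g) mod n, so
    reshape(U @ v, (n, n)) stacks all cyclic shifts of the filter v:
    every output unit applies a translated copy of the same filter.
    """
    U = np.zeros((n * n, n))
    for g in range(n):
        for x in range(n):
            U[g * n + x, (x - g) % n] = 1.0
    return U

n = 4
U = symmetry_matrix_cyclic(n)
v = np.array([1.0, -2.0, 0.5, 0.0])   # arbitrary filter parameters
W = (U @ v).reshape(n, n)             # circulant weight matrix
x = np.array([0.5, -1.0, 2.0, 0.0])
# The induced layer is shift equivariant for any choice of v:
assert np.allclose(W @ np.roll(x, 1), np.roll(W @ x, 1))
```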

4.3 Meta-learning equivariances

Figure 3: For each task, the inner loop adapts the filter parameters v to the task using the inner loop loss. Note that the symmetry matrix U does not change in the inner loop, and is only updated by the outer loop.
input : p(T): Meta-training task distribution
input : U, v: Randomly initialized symmetry matrices and filters.
input : α, η: Inner and outer loop step sizes.
while not done do
        sample minibatch {T_i ~ p(T)};
        forall T_i do
                (D_i^tr, D_i^val) ← T_i ;  // task data
                v_i′ ← v − α ∇_v L(U, v, D_i^tr) ;  // inner step
                g_i ← ∇_(U,v) L(U, v_i′, D_i^val) ;  /* outer gradient */
        (U, v) ← (U, v) − η Σ_i g_i ;  /* outer step */
Algorithm 1 MSR: Meta-Training

Meta-learning generally applies when we want to learn and exploit some shared structure in a distribution of tasks p(T). In this case, we assume the task distribution has some common underlying symmetry: i.e., models trained for each task should satisfy some set of shared equivariances. We extend gradient based meta-learning to automatically learn those equivariances.

Suppose we have an L-layer network; we first collect each layer’s symmetry matrix and filter parameters:

    U = {U⁽¹⁾, …, U⁽ᴸ⁾},  v = {v⁽¹⁾, …, v⁽ᴸ⁾}    (7)
Since we aim to learn equivariances that are shared across p(T), the symmetry matrices U should not change with the task. Hence, for any task T_i the inner loop fixes U and only updates v using the task training data:

    v_i′ = v − α ∇_v L(U, v, D_i^tr)    (8)

L is simply the supervised learning loss, and α is the inner loop step size. During meta-training, the outer loop then computes the loss on the task’s validation data using v_i′, and updates U:

    U ← U − η ∇_U Σ_i L(U, v_i′, D_i^val)    (9)
We illustrate the inner and outer loop updates in Fig. 3. Note that in addition to meta-learning the symmetry matrices, we can also still meta-learn the filter initialization as in prior work. In practice we also take outer updates averaged over mini-batches of tasks, as we describe in Alg. 1.

After meta-training is complete, we freeze the symmetry matrices U. On a new test task, we use the inner loop (Eq. 8) to update only the filter parameters v. The frozen U enforces the meta-learned equivariance-inducing parameter sharing in each layer. This sharing improves generalization by reducing the number of task-specific inner loop parameters. For example, the sharing pattern of standard convolution guarantees that the weight matrix is constant along any diagonal, reducing the number of per-task parameters (see Fig. 1).
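A first-order sketch of the inner update of Eq. 8 for a single reparameterized layer with MSE loss, with gradients written out by hand (the actual method backpropagates through the inner step for the outer update of Eq. 9; names and shapes here are ours):

```python
import numpy as np

def loss_and_grads(U, v, x, y, m, n):
    """MSE loss of a reparameterized layer and its gradients.

    W = reshape(U v, (m, n)); dL/dW = 2 (W x - y) x^T, and by the chain
    rule dL/dv = U^T vec(dL/dW) and dL/dU = vec(dL/dW) v^T.
    """
    W = (U @ v).reshape(m, n)
    err = W @ x - y
    dW = 2.0 * np.outer(err, x)
    return float(err @ err), U.T @ dW.ravel(), np.outer(dW.ravel(), v)

def msr_inner_step(U, v, task, alpha, m, n):
    """Inner loop of Eq. 8: adapt only v; the symmetry matrix U stays frozen."""
    x, y = task
    _, dv, _ = loss_and_grads(U, v, x, y, m, n)
    return v - alpha * dv
```

Only v changes inside the inner loop; the U gradient is reserved for the outer (meta) update.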

5 Can we recover convolutional structure?

We now introduce a series of synthetic meta-learning problems, where each problem contains regression tasks that are guaranteed to have some symmetries, such as translation, rotation, or reflection. We combine meta-learning methods with general architectures not designed with these symmetries in mind to see whether each method can automatically meta-learn these equivariances.

Table 1: Meta-test mean squared error (MSE, lower is better) of MSR and MAML using different models (MAML-FC, MAML-LC, MAML-Conv, MSR-FC (Ours)) on synthetic data with (partial) translation symmetry. MSR with a fully connected model (MSR-FC) is comparable to MAML with a convolution model (MAML-Conv) on fully translation equivariant (rank 1) data. On higher rank data (less symmetry), MSR outperforms all other approaches.
Figure 4: After observing translation equivariant data, MSR enforces convolutional parameter sharing on the weight matrix. An example weight matrix is shown above.

5.1 Learning (partial) translation symmetry

Our first batch of synthetic problems contains tasks with translational symmetry: we generate regression data by feeding random input vectors to a 1-D locally connected (LC) layer to generate output vectors. Each task corresponds to the values of the LC filter, and the meta-learner must minimize mean squared error (MSE) after observing a single input-output pair. For each problem we constrain the LC filter weights with a rank factorization Elsayed et al. (2020), implementing a form of partial translation symmetry. In the extreme case where the rank is 1, the LC layer is equivalent to convolution (ignoring the biases) and thus generates exactly translation equivariant task data. We apply both MSR and MAML to this problem using a single layer fully connected model (MSR-FC and MAML-FC, respectively), so these models have no translational equivariance built in and must meta-learn it to solve the tasks efficiently. For comparison, we also train convolutional and locally connected models with MAML (MAML-Conv and MAML-LC, respectively). Since MAML-Conv’s architecture builds in translation equivariance, we expect it to at least perform well on the rank-1 problem. We train each method to convergence on the meta-training tasks of each problem, then evaluate the meta-test MSE. Appendix D.1 further explains the experimental setup.
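To illustrate the data-generating process, here is a toy 1-D locally connected layer with circular padding; when its per-position filter bank has rank 1, every position shares one filter, so the layer collapses to a circular convolution and its outputs are exactly translation equivariant (illustrative code with made-up sizes, not the experiment code):

```python
import numpy as np

def lc_layer(x, filters):
    """1-D locally connected layer with circular padding.

    filters: (n, k) array; output position i applies its own length-k
    filter to the window starting at i (no weight sharing in general).
    """
    n, k = filters.shape
    return np.array([
        sum(filters[i, j] * x[(i + j) % n] for j in range(k))
        for i in range(n)
    ])

n = 6
base = np.array([1.0, -1.0, 0.5])
filters = np.outer(np.ones(n), base)  # rank-1 bank: all rows share one filter
x = np.arange(n, dtype=float)
# A rank-1 LC layer collapses to circular convolution, hence shift equivariance:
assert np.allclose(lc_layer(np.roll(x, 1), filters), np.roll(lc_layer(x, filters), 1))
```

With a full-rank filter bank the same layer is not shift equivariant, mirroring the partial-symmetry setting in the text.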

Table 1 shows how each method performs on each of the synthetic problems, with columns denoting the rank of the LC filter generating task data. On completely translation equivariant data (rank 1), MSR-FC performs comparably to MAML-Conv despite not having built in symmetry assumptions. MSR-FC actually meta-learns symmetry matrices that enforce convolutional sharing structure on the weights (Fig. 4), essentially “learning convolution” from translation-equivariant data. In Appendix C we inspect the meta-learned symmetry matrix U, which we find implements convolution with filter v just as Sec. 4.2 predicted. Meanwhile, MAML-FC and MAML-LC perform significantly worse as they are unable to meta-learn this structure.

On partially symmetric data (rank greater than 1), MSR-FC outperforms all other methods due to its ability to flexibly meta-learn even partial symmetries. MAML-Conv performs worse in these settings since the convolution assumption is overly restrictive, while MAML-FC and MAML-LC are not able to meta-learn much structure at all.

5.2 Learning equivariance to rotations and flips

Rotation/Flip Equivariance MSE
Method Rot Rot+Flip
MSR-Conv (Ours) .004 .001
MAML-Conv .504 .507
Table 2: MSR learns rotation and flip equivariant parameter sharing on top of a standard convolution model, and thus achieves much better generalization error on meta-test tasks compared to MAML on rotation and flip equivariant data.

We also created synthetic problems with 2-D synthetic image inputs and outputs, in order to study rotation and flip equivariance. We generate task data by passing randomly generated inputs through a single layer E(2)-equivariant steerable CNN Weiler and Cesa (2019) configured to be equivariant to combinations of translations, discrete rotations by increments of 45°, and reflections. Hence our synthetic task data contains rotation and reflection in addition to translation symmetry. Each task corresponds to different values of the data-generating network’s weights. We apply MSR and MAML to a single standard convolution layer. By reparameterizing a convolution layer, we have already guaranteed translational equivariance, but each method must still meta-learn rotation and reflection (flip) equivariance from the data. Table 2 shows that MSR easily learns rotation and rotation+reflection equivariance on top of the convolutional model’s built in translational equivariance.

6 Can we learn invariances from augmented data?

input : : Meta-training tasks
input : Meta-Train: Any meta-learner
input : Augment: Data augmenter
forall  do
        // task data split
Algorithm 2 Augmentation Meta-Training

Practitioners commonly use data augmentation to train their models to have certain invariances. Since invariance is a special case of equivariance, we can also view data augmentation as a way of learning equivariant models. The downside is that we need augmented data for each task. While augmentation is often possible during meta-training, there are many situations where it is impractical at meta-test time. For example, in robotics we may meta-train a robot in simulation and then deploy (meta-test) in the real world, a kind of sim2real transfer strategy Song et al. (2020). During meta-training we can augment data using the simulated environment, but we cannot do the same at meta-test time in the real world.

Can we instead use MSR to learn equivariances from data augmentation at training time, and encode those learned equivariances into the network itself? This way, the network would preserve learned equivariances on new meta-test tasks without needing any additional data augmentation.

Alg. 2 describes our approach for meta-learning invariances from data augmentation, which wraps around any meta-learning algorithm and uses generic data augmentation procedures. Recall that each task is split into training and validation data (D^tr, D^val). We use the data augmentation procedure to modify only the validation data, producing a new validation dataset for each task. We re-assemble each modified task using the original training data and the modified validation data. For each task, the meta-learner observes unaugmented training data, but must generalize to perform well on augmented validation data. This forces the model to be invariant to the augmentation transforms without actually seeing any augmented training data.
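The core of Alg. 2, augmenting only each task's validation split, can be sketched as follows (function and variable names are ours):

```python
def augment_tasks(tasks, augment):
    """Alg. 2 core: keep each task's training split, augment only validation.

    tasks: list of (train_pairs, val_pairs); augment maps one (x, y)
    example to an augmented (x, y) example.
    """
    modified = []
    for train, val in tasks:
        aug_val = [augment(x, y) for (x, y) in val]
        modified.append((train, aug_val))   # original train, augmented val
    return modified
```

The wrapped meta-learner then trains on the modified tasks exactly as it would on the originals.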

We apply this augmentation strategy to Omniglot Lake et al. (2015) and MiniImagenet Vinyals et al. (2016) few shot classification to create the Aug-Omniglot and Aug-MiniImagenet benchmarks. Our data augmentation function contains a combination of random rotations, flips, and resizes (rescaling), which we apply only to each task’s validation data as described above. Aside from the augmentation procedure, the benchmarks are identical to prior work Finn et al. (2017): for each task, the model must classify images into one of either 5 or 20 classes (N-way) and receives either 1 or 5 examples of each class in the task training data (k-shot).

We tried combining Alg. 2 with our MSR method and three other meta-learning algorithms: MAML Finn et al. (2017), ANIL Raghu et al. (2019), and Prototypical Networks (ProtoNets) Snell et al. (2017). While the latter three methods all have the potential to learn equivariant features through Alg. 2, we hypothesize that since MSR enforces learned equivariance through its symmetry matrices, it should outperform those feature meta-learning methods. Appendix D.2 describes the experimental setup and method implementations in more detail.

Table 3 shows each method’s meta-test accuracies on both benchmarks. Across different settings MSR performs either comparably to the best method, or the best. MAML and ANIL perform similarly to each other, and usually worse than MSR, suggesting that learning equivariant or invariant features is not as helpful as learning equivariant layer structures. ProtoNets perform well on the easier Aug-Omniglot benchmark, but evidently struggle with learning a transformation invariant metric space on the harder Aug-MiniImagenet problems. Note that MSR’s reparameterization increases the number of meta-learned parameters at each layer, so MSR models contain more total parameters than corresponding MAML models. The “MAML (Big)” results show MAML performance with very large models containing more total parameters than the corresponding MSR models. The results show that MSR also outperforms these larger MAML models despite having fewer total parameters.

Aug-Omniglot Aug-MiniImagenet
5 way 20 way 5 way
Method 1-shot 5-shot 1-shot 5-shot 1-shot 5-shot
MAML (Big)
MSR (Ours)
Table 3: Meta-test accuracies on Aug-Omniglot and Aug-MiniImagenet few-shot classification, which requires generalizing to augmented validation data from un-augmented training data. MSR performs comparably to or better than other methods under this augmented regime. Results shown with 95% CIs.

7 Discussion and Future Work

We introduce a method for automatically meta-learning equivariances in neural network models, by encoding learned equivariance-inducing parameter sharing patterns in each layer. On new tasks, these sharing patterns reduce the number of task-specific parameters and improve generalization. Our experiments show that this method can improve few-shot generalization on task distributions with shared underlying symmetries. We also introduce a strategy for meta-training invariances into networks using data augmentation, and show that it works well with our method. By encoding equivariances into the network as a parameter sharing pattern, our method has the benefit of preserving learned equivariances on new tasks so it can learn more efficiently.

Machine learning thus far has benefited from exploiting human knowledge of problem symmetries, and we believe this work presents a step towards learning and exploiting symmetries automatically. This work leads to numerous directions for future investigation. In addition to generalization benefits, standard convolution is practical since it exploits the parameter sharing structure to improve computational efficiency, relative to a fully connected layer of the same input/output dimensions. While MSR can reparameterize known structured layers (such as standard convolution) for efficiency, it does not exploit learned structure to further optimize its computation. Can we automatically learn or find efficient implementations of these more structured operations? Additionally, our method is best for learning symmetries which are shared across a distribution of tasks. Further research on quickly discovering symmetries which are particular to a single task would make deep learning methods significantly more useful on many difficult real world problems.

We would like to thank Sam Greydanus, Archit Sharma, and Yiding Jiang for reviewing and critiquing earlier drafts of this paper. This work was supported in part by Google. CF is a CIFAR Fellow.


Appendix A Approximation and Tractability

A.1 Fully connected

From Eq. 6 we see that for a layer with m output units, n input units, and k filter parameters, the symmetry matrix U has mn × k entries. This is too expensive for larger layers, so in practice we need a factorized reparameterization to reduce memory and compute requirements when the layer is larger.

For fully connected layers, we use a Kronecker factorization to scalably reparameterize each layer. First, we assume that the filter parameters can be arranged in a matrix V. Then we reparameterize each layer’s weight matrix similar to Eq. 6, but assume the symmetry matrix is the Kronecker product of two smaller matrices U₁ and U₂:

    W = U₁ V U₂ᵀ  (equivalently, vec(W) = (U₁ ⊗ U₂) vec(V) under the row-major vec convention)    (10)

Since we only store the two Kronecker factors U₁ and U₂, we reduce the memory requirements of the symmetry matrix from that of a full mn × k matrix to that of its two factors. In our experiments we generally choose V to be m × n, so U₁ is m × m and U₂ is n × n. Then the actual memory cost of each reparameterized layer (including U₁, U₂, and V) is m² + n² + mn, compared to mn for a standard fully connected layer. So in the case where m = n, MSR increases memory cost by roughly a constant factor of 3.
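The saving comes from never materializing U₁ ⊗ U₂; a quick numpy check of the factored form against the explicit Kronecker product (row-major vec convention; the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
m1, m2, k1, k2 = 3, 4, 2, 5
U1 = rng.standard_normal((m1, k1))
U2 = rng.standard_normal((m2, k2))
V = rng.standard_normal((k1, k2))

# Full reparameterization: the symmetry matrix U = U1 (x) U2 applied to vec(V)...
W_full = (np.kron(U1, U2) @ V.ravel()).reshape(m1, m2)
# ...equals the cheap factored form, which never materializes U:
W_cheap = U1 @ V @ U2.T
assert np.allclose(W_full, W_cheap)
```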

After this approximation, MSR also increases computation time (forward and backward passes) by roughly a constant factor compared to MAML. A standard fully connected layer requires a single matrix-matrix multiply WX in the forward pass (here the input X and output Y = WX are matrices, since inputs and outputs come in batches). Applying the Kronecker-vec trick to Eq. 10 gives:

W = U_1 V U_2^T

(using row-major vectorization; a column-major convention instead gives W = U_2 V U_1^T). So rather than actually forming the (possibly large) symmetry matrix U_1 ⊗ U_2, we can directly construct W using just two additional matrix-matrix multiplies, U_1 V and (U_1 V) U_2^T. Again assuming k = mn and m ≈ n, each matrix in the preceding expression is approximately the same size as W.
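The trick can be checked numerically. The sketch below (with arbitrary small sizes, numpy's default row-major reshape, and illustrative names U1/U2) confirms that the two-multiply form matches multiplying by the materialized Kronecker product:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 5  # hypothetical layer sizes

U1 = rng.standard_normal((m, m))  # Kronecker factor for the output dimension
U2 = rng.standard_normal((n, n))  # Kronecker factor for the input dimension
V = rng.standard_normal((m, n))   # filter parameters arranged as a matrix

# Naive: materialize the full (mn x mn) symmetry matrix U1 kron U2.
W_naive = (np.kron(U1, U2) @ V.reshape(-1)).reshape(m, n)

# Kronecker-vec trick: (U1 kron U2) vec(V) = vec(U1 @ V @ U2.T)
# under row-major vectorization, so only two small multiplies are needed.
W_trick = U1 @ V @ U2.T

assert np.allclose(W_naive, W_trick)
```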

a.2 2D Convolution

When reparameterizing 2-D convolutions, we need to produce a filter: a rank-4 tensor W in R^{c_out x c_in x h x w}. We assume the filter parameters are stored in a rank-3 tensor V (with the two channel dimensions merged into a single mode), and factorize the symmetry matrix into three separate matrices U_1, U_2, and U_3. A similar Kronecker product approximation gives:

W = reshape(V x_1 U_1 x_2 U_2 x_3 U_3),

where x_n represents n-mode tensor multiplication Kolda and Bader [2009]. Just as in the fully connected case, this convolution reparameterization is equivalent to a Kronecker factorization U = U_1 ⊗ U_2 ⊗ U_3 of the symmetry matrix.
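To illustrate this equivalence, the sketch below (hypothetical mode sizes, row-major vectorization) compares three mode-n products against multiplying by the materialized triple Kronecker product:

```python
import numpy as np

rng = np.random.default_rng(1)
d1, d2, d3 = 3, 4, 5  # hypothetical mode sizes of the parameter tensor V
V = rng.standard_normal((d1, d2, d3))
U1 = rng.standard_normal((d1, d1))
U2 = rng.standard_normal((d2, d2))
U3 = rng.standard_normal((d3, d3))

# Mode-n products: contract U_n against the n-th mode of V.
W = np.einsum("ai,bj,ck,ijk->abc", U1, U2, U3, V)

# Equivalent full Kronecker factorization of the symmetry matrix,
# acting on the (row-major) vectorization of V.
U = np.kron(np.kron(U1, U2), U3)
W_kron = (U @ V.reshape(-1)).reshape(d1, d2, d3)

assert np.allclose(W, W_kron)
```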

An analysis of the memory and computation requirements of reparameterized convolution layers proceeds similarly to the above analysis for the fully connected case. As we describe below, in our augmented experiments using convolutional models, each MSR outer step takes roughly a small constant factor longer than a MAML outer step.

Appendix B Proof of Proposition 1

We model an input signal as a function f: X → R on some underlying space X. We then consider a finite group G of symmetries acting transitively on X, over which we desire G-equivariance. Many (but not all) of the groups discussed in Weiler and Cesa [2019] are finite groups of this form.

Kondor and Trivedi [2018] prove that a function is equivariant to G if and only if it is a G-convolution. Following the original paper on group-equivariant CNNs Cohen and Welling [2016], we in fact consider a slight simplification of this notion: a finite "G cross-correlation" of f with a filter ψ. This can be defined as:

(ψ ⋆ f)(g) = Σ_{x ∈ X} ψ(g^{-1} x) f(x). (14)
Figure 5: The theoretical convolutional weight symmetry matrix for the group C_4 = {e, g, g^2, g^3}, where g is a π/2-radian rotation of a 3x3 image. Notice that the image is flattened into a length-9 vector; the matrix ρ(g) describes the action of a π/2-radian rotation on this flattened image.
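The sketch below constructs such a representation matrix explicitly for a 90-degree rotation of a flattened 3x3 image (using numpy's rot90 convention) and checks that it has order 4:

```python
import numpy as np

n = 3  # image side length, as in the 3x3 example
# rho[dst, src] = 1 iff output pixel dst receives input pixel src under a
# 90-degree rotation of the flattened image.
rho = np.zeros((n * n, n * n))
for r in range(n):
    for c in range(n):
        src = r * n + c
        # np.rot90 maps pixel (r, c) to (n - 1 - c, r).
        dst = (n - 1 - c) * n + r
        rho[dst, src] = 1.0

img = np.arange(n * n, dtype=float)
assert np.allclose(rho @ img, np.rot90(img.reshape(n, n)).reshape(-1))
# Applying the matrix four times is the identity: rho represents C_4.
assert np.allclose(np.linalg.matrix_power(rho, 4), np.eye(n * n))
```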

In order for a fully connected layer's weight matrix to act on the function f, we must first assume that f has finite support {x_1, …, x_n} ⊆ X, i.e. f is non-zero only at these points within X. This means f can be represented as a "dual" vector in R^n whose i-th entry is f(x_i); abusing notation, we also write f for this vector, on which a weight matrix can act. (This uses the natural linear-algebraic dual of the free vector space on X.)

We aim to show that a certain value of U allows arbitrary cross-correlations, and only cross-correlations, to be represented by fully connected layers with weight matrices of the form

W = reshape(U v),

where v is an arbitrary vector of appropriate dimension. The reshape specifically gives W in R^{|G| x n}, which transforms the vector f in R^n.

With this in mind, we first use the fact that the action of the group can be represented as a matrix transformation on this vector space, using the matrix representation ρ: G → R^{n x n}:

the dual vector of the transformed signal g · f is ρ(g) f,

where notably ρ(gh) = ρ(g) ρ(h).

We consider v in R^n and U in R^{|G| n x n}. Since v has n entries, we can also treat v as the "dual" vector of a function ψ with support {x_1, …, x_n}, described by ψ(x_i) = v_i. We can interpret ψ as a convolutional filter, just like in Eq. 14. G then acts on v just as it acts on f, namely:

the dual vector of g · ψ is ρ(g) v.
Now, we define U by stacking the matrix representations of the group elements g_1, …, g_{|G|}:

U = [ρ(g_1); ρ(g_2); …; ρ(g_{|G|})] in R^{|G| n x n}, (18)

where the blocks ρ(g_j) are stacked vertically. This implies the following value of W = reshape(U v): the row of W corresponding to g_j is (ρ(g_j) v)^T.
This then grants that the output of the fully connected layer with weights W, at the component corresponding to g, is:

(W f)_g = (ρ(g) v)^T f = Σ_i (ρ(g) v)_i f(x_i).
Using that f has finite support {x_1, …, x_n}, and that (ρ(g) v)_i = ψ(g^{-1} x_i), we have that:

(W f)_g = Σ_{x ∈ X} ψ(g^{-1} x) f(x).
Lastly, we can interpret W f as a function on the group, mapping each g in G to its corresponding component:

g ↦ (W f)_g,

which is precisely the cross-correlation described in Eq. 14 with filter ψ. This implies that f ↦ W f must be equivariant with respect to G. Moreover, all such G-equivariant functions are cross-correlations parameterized by ψ, so with U fixed as in Eq. 18, layers of the form W = reshape(U v) can represent all G-equivariant functions.

This means that if v is chosen to have the same dimension as the input, and the weight symmetry matrix is sufficiently large, any equivariance to a finite group can be meta-learned. Moreover, in this case the symmetry matrix has a very natural and interpretable structure, containing a representation of the group in block submatrices. Lastly, notice that v corresponds (dually) to the convolutional filter, justifying the notion that we learn the convolutional filter in the inner loop, and the group action in the outer loop.
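As a concrete instance of this construction, the sketch below builds U for the cyclic translation group on a length-n signal by stacking its permutation representation matrices, and checks that reshape(Uv) acts on a signal as circular cross-correlation with the filter v (names and sizes are illustrative):

```python
import numpy as np

n = 6  # hypothetical signal length; G is the cyclic translation group Z_n

def rho(t: int) -> np.ndarray:
    """Representation of a shift by t: a cyclic permutation matrix."""
    return np.roll(np.eye(n), t, axis=0)

# Eq. 18: stack the representation matrices to form the symmetry matrix U.
U = np.concatenate([rho(t) for t in range(n)], axis=0)  # shape (n*n, n)

rng = np.random.default_rng(2)
v = rng.standard_normal(n)  # filter parameters (learned in the inner loop)
f = rng.standard_normal(n)  # input signal

# W = reshape(Uv): row t is (rho(t) v)^T, so W is a circulant matrix.
W = (U @ v).reshape(n, n)

# The layer output matches circular cross-correlation of f with the filter.
expected = np.array(
    [sum(v[(i - t) % n] * f[i] for i in range(n)) for t in range(n)])
assert np.allclose(W @ f, expected)
```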

In the above proof, we used the original definition of group convolution Cohen and Welling [2016] for the sake of simplicity. It is useful to note that a slight generalization of the proof applies to more general equivariance between representations, as defined in equation (3) (i.e., the case where the output transformation is an arbitrary linear map, not necessarily a permutation). This is subject to a unitarity condition on the group representation Worrall and Welling [2019].

Without any modification to the method, arbitrary linear approximations to group convolution can be learned when the group is not a subgroup of the symmetric group, i.e. when ρ(G) does not consist purely of permutations of indices. For example, non-axis-aligned rotations can easily be approximated through bilinear or bicubic interpolation, whereby the value of a pixel after rotation is a linear interpolation of the 4 or 16 pixels nearest to the "true" position of that pixel before rotation. This allows us to practically learn groups like C_8, which is generated by 45-degree rotations.
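A minimal sketch of such an interpolated rotation operator (a hypothetical helper, bilinear only) is below; for axis-aligned angles it reduces to an exact permutation, which we check against numpy's rot90:

```python
import numpy as np

def rotation_operator(n: int, theta: float) -> np.ndarray:
    """Linear operator on flattened n x n images rotating by theta about the
    image center; each output pixel bilinearly interpolates the (at most 4)
    input pixels nearest its inverse-rotated source location."""
    R = np.zeros((n * n, n * n))
    c = (n - 1) / 2.0
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    for ro in range(n):
        for co in range(n):
            y, x = ro - c, co - c
            src_r = cos_t * y + sin_t * x + c  # inverse-rotate output coords
            src_c = -sin_t * y + cos_t * x + c
            r0, c0 = int(np.floor(src_r)), int(np.floor(src_c))
            dr, dc = src_r - r0, src_c - c0
            for ri, wr in ((r0, 1 - dr), (r0 + 1, dr)):
                for ci, wc in ((c0, 1 - dc), (c0 + 1, dc)):
                    if 0 <= ri < n and 0 <= ci < n and wr * wc > 0:
                        R[ro * n + co, ri * n + ci] += wr * wc
    return R

img = np.random.default_rng(4).standard_normal((5, 5))
# For a 90-degree rotation the operator is an exact permutation (cf. Fig. 5);
# for 45 degrees it is a genuine interpolation rather than a permutation.
assert np.allclose(rotation_operator(5, np.pi / 2) @ img.reshape(-1),
                   np.rot90(img).reshape(-1))
```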

Appendix C Visualizing the meta-learned symmetry matrix

Figure 6: The submatrices of the meta-learned symmetry matrix of MSR-FC on the translation-equivariant problem (Sec. 5.1). Intensity corresponds to each entry's absolute value. We see that the symmetry matrix has been meta-learned to implement standard convolution: each submatrix translates the filter by a different number of spaces. Note that in actuality the submatrices are stacked on top of each other as in Eq. 18, but we display them side-by-side for visualization.

Fig. 6 visualizes the actual symmetry matrix that MSR-FC meta-learns from translation-equivariant data. Each column of the figure shows one of the submatrices, corresponding to the action of a discrete translation group element on the filter. In other words, MSR automatically meta-learned the symmetry matrix to contain submatrices that each translate the filter by a different number of spaces, effectively meta-learning standard convolution! In the actual symmetry matrix the submatrices are stacked on top of each other as in Eq. 18, but we display each submatrix side-by-side for easy visualization. The figure is also cropped for space: we show only the first several submatrices, and crop each submatrix.

Appendix D Experimental details

Throughout this work we implemented all gradient based meta-learning algorithms in PyTorch using the Higher Grefenstette et al. [2019] library.

d.1 Synthetic Problems

In the synthetic problems we generated regression data using either a single locally connected layer (Sec. 5.1) or a single E(2)-steerable layer (Sec. 5.2). Each task corresponds to different weights of the data generating network, whose entries we sample independently from a standard normal distribution. For rank-r locally connected filters, we sampled r basis filters and then set the filter value at each spatial location to be a random linear combination of those basis filters. We generated a pool of tasks for each synthetic problem and randomly split them into meta-train and meta-test tasks.

For each particular task, we generated 20 data points by randomly sampling the input vector or "image" entries from a standard normal distribution, passing the input into the data generating network, and saving the input and output as a pair. We then randomly split the task data into task training data (1 data point) and task validation data (19 data points). Hence the model has to solve each task after viewing a single task training data point.
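The task-generation procedure can be sketched as follows. For brevity the data-generating network is simplified here to a single random linear layer (the actual experiments use locally connected or E(2)-steerable layers), and the dimension is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
n_in = 8  # hypothetical input dimension

def make_task():
    """One synthetic regression task: fresh random weights for the
    data-generating network, 1 training pair, and 19 validation pairs."""
    W_task = rng.standard_normal((n_in, n_in))  # task-specific weights
    xs = rng.standard_normal((20, n_in))        # 20 random inputs
    ys = xs @ W_task.T                          # targets from the network
    return (xs[:1], ys[:1]), (xs[1:], ys[1:])   # 1 train / 19 validation

(train_x, train_y), (val_x, val_y) = make_task()
assert train_x.shape == (1, n_in) and val_x.shape == (19, n_in)
```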

During meta-training we trained each method for a fixed number of outer steps, which was sufficient for the training loss to converge for every method in every problem. We used the Adam Kingma and Ba [2014] optimizer in the outer loop. In the inner loop we used SGD with meta-learned per-layer learning rates. We used a single inner loop step for all experiments, and a fixed task batch size during meta-training. At meta-test time we evaluated average performance and error bars using random held-out tasks.

We ran all experiments on a single machine with a single NVidia RTX 2080Ti GPU, measuring throughput in (outer loop) steps per second for both our MSR-FC and MSR-Conv experiments.

d.2 Augmentation Experiments

To create Aug-Omniglot and Aug-MiniImagenet, we extended the Omniglot and MiniImagenet benchmarks from TorchMeta Deleu et al. [2019]. Each task in these benchmarks is split into support (train) and query (validation) datasets. For the augmented benchmarks we applied data augmentation to only the query dataset of each task, which consisted of randomly resized crops, reflections, and rotations by up to 30 degrees. Using the torchvision library, the augmentation function is:

# Data augmentation applied to ONLY the query set.
from PIL import Image
from torchvision.transforms import Compose, RandomHorizontalFlip, RandomResizedCrop, RandomRotation

size = 28  # Omniglot image size. 84 for MiniImagenet.
augment_fn = Compose([
    RandomResizedCrop(size, scale=(0.8, 1.0)),
    RandomHorizontalFlip(),  # reflections
    RandomRotation(30, resample=Image.BILINEAR),
])

For all experiments except MiniImagenet 5-shot, MAML and MSR used exactly the same convolutional architecture (same number of layers, number of channels, filter sizes, etc.) as prior work on Omniglot and MiniImagenet Vinyals et al. [2016]. For MSR we reparameterize each layer's weight matrix or convolutional filter. For MiniImagenet 5-shot, we found that increasing architecture size helped both methods: for the first 3 convolution layers, we increased the number of output channels and the kernel size. We then inserted an additional convolution layer right before the linear output layer. For fair comparison we also increased the ProtoNet architecture size on MiniImagenet 5-shot, using more output channels at each layer. We found that increasing the kernel size at each layer in the ProtoNet worsened performance, so we left it unchanged.

For “MAML (Big)” experiments we increased the architecture size of the MAML model to exceed the number of meta-parameters (symmetry matrices + filter parameters) in the corresponding MSR model. For MiniImagenet 5-shot and 1-shot we increased the number of output channels at each of the convolution layers and inserted an additional linear layer before the final linear layer (with different sizes in the two settings). For the Omniglot experiments we increased the number of output channels at each of the convolution layers.

For all experiments and gradient-based methods we trained with the Adam optimizer in the outer loop, using one learning rate for MiniImagenet 5-shot and another for all other experiments. In the inner loop we used SGD with meta-learned per-layer learning rates, initialized separately for Omniglot and MiniImagenet. We meta-trained using a single inner loop step in all experiments, and used additional inner loop steps at meta-test time. Although MAML originally meta-trained with multiple inner loop steps on MiniImagenet, we found that this destabilized meta-training on our augmented version. We hypothesize that this is due to the discrepancy between support and query data in our augmented problems. During meta-training we used separate task batch sizes for Omniglot and MiniImagenet. At meta-test time we evaluated average performance and error bars using held-out meta-test tasks.

We ran all experiments on a machine with a single NVidia Titan RTX GPU. For Aug-Omniglot, we ran two experiments simultaneously on the same machine, which likely slowed each individual experiment down. Our MSR method ran at fewer (outer loop) steps per second than the MAML baseline. For Aug-MiniImagenet we ran one experiment per machine; there too, MSR ran at fewer steps per second than MAML.