Steerable CNNs

12/27/2016 · Taco S. Cohen, et al. · University of Amsterdam

It has long been recognized that the invariance and equivariance properties of a representation are critically important for success in many vision tasks. In this paper we present Steerable Convolutional Neural Networks, an efficient and flexible class of equivariant convolutional networks. We show that steerable CNNs achieve state of the art results on the CIFAR image classification benchmark. The mathematical theory of steerable representations reveals a type system in which any steerable representation is a composition of elementary feature types, each one associated with a particular kind of symmetry. We show how the parameter cost of a steerable filter bank depends on the types of the input and output features, and show how to use this knowledge to construct CNNs that utilize parameters effectively.


1 Introduction

Footnote: An earlier version of this paper was published on openreview.net on November 4th, 2016.

Much of the recent progress in computer vision can be attributed to the availability of large labelled datasets and deep neural networks capable of absorbing large amounts of information. While many practical problems can now be solved, the requirement for big (labelled) data is a fundamentally unsatisfactory state of affairs. Human beings are able to learn new concepts with very few labels, and mimicking this ability is an important challenge for artificial intelligence research. From an applied perspective, improving the statistical efficiency of deep learning is vital because in many domains (e.g. medical image analysis), acquiring large amounts of labelled data is costly.

To improve the statistical efficiency of machine learning methods, many have sought to learn invariant representations. In deep learning, however, intermediate layers should not be fully invariant, because the relative pose of local features must be preserved for further layers (Cohen & Welling, 2016; Hinton et al., 2011). Thus, one is led to the idea of equivariance: a network is equivariant if the representations it produces transform in a predictable linear manner under transformations of the input. In other words, equivariant networks produce representations that are steerable. Steerability makes it possible to apply filters not just in every position (as in a standard convolution layer), but in every pose, thus allowing for increased parameter sharing.

Previous work has shown that equivariant CNNs yield state of the art results on classification tasks (Cohen & Welling, 2016; Dieleman et al., 2016), even though they only enforce equivariance to small groups of transformations like rotations by multiples of 90 degrees. Learning representations that are equivariant to larger groups is likely to result in further gains, but the computational cost of current methods scales with the size of the group, making this impractical. In this paper we present a general theory of steerable representations that covers all forms of linear steerability in convolutional networks, thus increasing the flexibility of equivariant CNNs and allowing us to decouple the computational cost from the size of the group, paving the way for future scaling.

We show that any steerable representation is a composition of elementary feature types. Each elementary feature can be steered independently of the others, and captures a distinct characteristic of the input that has an invariant or “objective” meaning. This doctrine of “observer-independent quantities” was put forward by (Weyl, 1939, ch. 1.4) and is used throughout physics. It has been applied to vision and representation learning by Kanatani (1990); Cohen (2013).

This type system puts constraints on the network weights and architecture. Specifically, since an equivariant filter bank is required to map given input feature types to given output feature types, the number of parameters required by such a filter bank is reduced. Furthermore, by the same logic that tells us not to add meters to seconds, steerability considerations prevent us from adding features of different types (e.g. for residual learning (He et al., 2015)).

The rest of this paper is organized as follows. The theory of steerable CNNs is introduced in Section 2. Related work is discussed in Section 3, which is followed by classification experiments in Section 4 and a discussion and conclusion in Section 5.

2 Steerable CNNs

2.1 Feature maps and fibers

Consider a 2D signal f : Z² → R^K with K channels. The signal may be an input to the network or a feature representation computed by a CNN. Since signals can be added and multiplied by scalars, the set of signals of this signature forms a linear space F. Each layer l of the network has its own feature space F_l, but we will often suppress the layer index to reduce clutter.

It is customary in deep learning to describe the feature space as a stack of K feature maps f_k (for k = 1, …, K). In this paper we also consider another decomposition of the feature space into fibers. The fiber at position (x, y) in the "base space" Z² is the K-dimensional vector space spanned by the channels at that position. Thus, a signal f is comprised of feature vectors f(x, y) that live in the fibers (see Figure 1(a)).

(a) The feature space is decomposed into a stack of feature maps (left) and a bundle of fibers (right).
(b) An image f is rotated by r using π_0(r).
Figure 1: Feature maps, fibers, and the transformation law of the input space F_0.

Given some group G of transformations that acts on points x ∈ Z², we can transform signals f ∈ F_0:

[π_0(g) f](x) = f(g⁻¹ x).    (1)

This says that the value of the pixel at g⁻¹x gets moved to x by the transformation g. We note that π_0(g) is a linear operator.

An important property of π_0 is that π_0(gh) = π_0(g) π_0(h). Here, gh means composition of transformations in G, while π_0(g) π_0(h) denotes matrix multiplication. A vector space such as F_0 equipped with a set of linear operators π_0(g) satisfying this condition is known as a group representation (or just representation, for short). A lot is known about group representations (Serre, 1977), and we will make extensive use of the theory, explaining the relevant concepts as needed.
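To make this concrete, here is a minimal numerical sketch (not from the paper) that realizes π_0 for the point group D4 acting on a small single-channel image and checks the homomorphism property π_0(gh) = π_0(g) π_0(h). The image size, the helper name transform, and the choice of generator matrices are illustrative assumptions.

```python
import numpy as np

r = np.array([[0, -1], [1, 0]])          # rotation by 90 degrees
m = np.array([[-1, 0], [0, 1]])          # mirror reflection
I2 = np.eye(2, dtype=int)
D4 = [np.linalg.matrix_power(r, k) @ (m if f else I2) for k in range(4) for f in (0, 1)]

def transform(g, img):
    """[pi_0(g) f](x) = f(g^{-1} x) for a square, odd-sized, single-channel image f."""
    n = img.shape[0]
    c = (n - 1) // 2
    ginv = np.round(np.linalg.inv(g)).astype(int)
    out = np.empty_like(img)
    for i in range(n):
        for j in range(n):
            x = np.array([j - c, c - i])   # pixel (i, j) in coordinates centered at the origin
            xs, ys = ginv @ x              # source location g^{-1} x
            out[i, j] = img[c - ys, xs + c]
    return out

img = np.arange(25).reshape(5, 5)
# pi_0 is a group representation: pi_0(g) pi_0(h) = pi_0(gh) for all g, h in D4.
for g in D4:
    for h in D4:
        assert np.array_equal(transform(g, transform(h, img)), transform(g @ h, img))
```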

2.2 Steerable representations

Let F_0 be a feature space with a group representation π_0, and let Φ : F_0 → F_1 be a convolutional network. The feature space F_1 is said to be (linearly) steerable with respect to G if, for all transformations g ∈ G, the features Φ f and Φ π_0(g) f are related by a linear transformation π_1(g) that does not depend on f. So π_1(g) allows us to "steer" the features in F_1 without referring to the input in F_0 from which they were computed.

Combining the definition of steerability (i.e. Φ π_0(g) = π_1(g) Φ) with the fact that π_0 is a group representation, we find that π_1 must also be a group representation:

π_1(gh) Φ = Φ π_0(gh) = Φ π_0(g) π_0(h) = π_1(g) Φ π_0(h) = π_1(g) π_1(h) Φ.    (2)

That is, π_1(gh) = π_1(g) π_1(h) (at least in the span of the image of Φ). Figure 2 gives an illustration.

Figure 2: Diagram showing the structural consistency that follows from equivariance of the network Φ and the group representation structure of π_0 and π_1. The result of following any path in this diagram depends only on the beginning and endpoint and is independent of the path itself; cf. eq. 2.

Although the theory can be developed in a more general setting, for simplicity we will only consider steerability with respect to discrete groups of transformations. Our running example will be the group p4m, which consists of translations, rotations by multiples of 90 degrees around any point, and reflections. We further restrict our attention to groups G that are constructed as a semi-direct product of the group of translations Z² and a group H of transformations that fixes the origin. For G = p4m, we have H = D4, the 8-element group of reflections and rotations about the origin.

Using this division, we can first construct a filter bank that generates H-steerable fibers, and then show that convolution with such a filter bank produces a feature space that is steerable with respect to the whole group G.

2.3 Equivariant filter banks

A filter bank can be described as an array of dimension K' × K × s × s, where K and K' denote the number of input and output channels and s is the kernel size. For our purposes it is useful to think of a filter bank as a linear map Ψ : F_0 → R^{K'} that takes as input a signal f and produces a K'-dimensional feature vector. The filter bank only looks at an s × s patch in F_0, so the matrix representing Ψ has shape K' × (s²K). To correlate a signal f using Ψ, one simply applies Ψ to translated copies of f, producing the output signal one fiber at a time.

We assume (by induction) that we have a representation π_0 that allows us to steer F_0. In order to make the output of the convolution steerable, we need the filter bank Ψ to be H-equivariant:

ρ(h) Ψ = Ψ π_0(h),  for all h ∈ H,    (3)

for some representation ρ of H that acts on the output fibers (see Figure 3). Note that we only require equivariance with respect to H (which excludes translations) and not G, because translations can move patterns into and out of the receptive field of a fiber, making full translation equivariance impossible.

Figure 3: A filter bank Ψ that is H-equivariant. In this example, ρ represents the 90-degree rotation by a permutation matrix that cyclically shifts the channels.

The space of maps Ψ satisfying the equivariance constraint is denoted Hom_H(π_0, ρ), because an equivariant map is a "homomorphism of group representations", meaning it respects the structure of the representations. Equivariant maps are also sometimes called intertwiners (Serre, 1977).

Since the equivariance constraint (eq. 3) is linear in Ψ, the space of admissible filter banks is a vector space: any linear combination of intertwiners is again an intertwiner. Hence, given π_0 and ρ, we can compute a basis for Hom_H(π_0, ρ) by solving a linear system.

Computation of the intertwiner basis is done offline, before training. Once we have such a basis ψ_1, …, ψ_n for Hom_H(π_0, ρ), we can express any equivariant filter bank Ψ as a linear combination Ψ = Σ_i α_i ψ_i using parameters α_i. As shown in Section 2.7, this can be done efficiently even in high dimensions.
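The sketch below illustrates one way such a linear system could be solved: it builds the permutation matrices of π_0 on a 3 × 3 single-channel patch, stacks the constraints ρ(h) Ψ = Ψ π_0(h) over all h ∈ D4, and reads off a basis of Hom_H(π_0, ρ) from the nullspace. The helper names and the choice of ρ (the trivial irrep A1) are assumptions made for illustration; the paper does not prescribe this particular implementation.

```python
import numpy as np
from itertools import product

# D4 generated by a 90-degree rotation r and a mirror m, as 2x2 integer matrices.
r = np.array([[0, -1], [1, 0]])
m = np.array([[-1, 0], [0, 1]])
D4 = [np.linalg.matrix_power(r, k) @ (m if f else np.eye(2, dtype=int))
      for k in range(4) for f in (0, 1)]

def patch_permutation(g, s=3):
    """Permutation matrix of pi_0(g) acting on a flattened s x s, single-channel patch."""
    c = (s - 1) // 2
    ginv = np.round(np.linalg.inv(g)).astype(int)
    P = np.zeros((s * s, s * s))
    for i, j in product(range(s), range(s)):
        x = np.array([j - c, c - i])       # pixel (i, j) in coordinates centered at the origin
        xs, ys = ginv @ x                  # source pixel g^{-1} x
        P[i * s + j, (c - ys) * s + (xs + c)] = 1.0
    return P

# Output representation rho: here the trivial irrep A1, i.e. rho(g) = [1] for every g.
rho = [np.array([[1.0]]) for _ in D4]

# Stack the linear constraints rho(g) Psi = Psi pi_0(g) over all g in D4 and compute
# the nullspace with an SVD; its dimension equals dim Hom_H(pi_0, rho).
rows = []
for g, rho_g in zip(D4, rho):
    pi0_g = patch_permutation(g)
    # column-major vec identity: vec(A X B) = kron(B.T, A) vec(X)
    rows.append(np.kron(np.eye(pi0_g.shape[0]), rho_g) - np.kron(pi0_g.T, np.eye(rho_g.shape[0])))
M = np.concatenate(rows, axis=0)
_, sv, Vt = np.linalg.svd(M)
basis = Vt[np.sum(sv > 1e-8):]              # rows span the nullspace
print("dim Hom_H(pi_0, A1) =", len(basis))  # 3: the D4-orbits of pixels in a 3x3 patch

# Sanity check: every basis element really is an intertwiner.
for v in basis:
    Psi = v.reshape(1, 9, order="F")
    assert all(np.allclose(r_g @ Psi, Psi @ patch_permutation(g)) for g, r_g in zip(D4, rho))
```

The resulting dimension, 3, matches the three copies of A1 in the type of π_0 reported in Table 1 (the three D4-orbits of pixels in a 3 × 3 patch: center, edge-midpoints, corners).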

2.4 Induction

We have shown how to parameterize filter banks that intertwine π_0 and ρ, making the output fibers H-steerable by ρ if the input space F_0 is H-steerable by π_0. In this section we show how H-steerability of the fibers leads to G-steerability of the whole feature space F_1. This happens through a natural and important construction known as the induced representation (Mackey, 1952, 1953, 1968; Serre, 1977; Taylor, 1986; Folland, 1995; Kaniuth & Taylor, 2013).

As stated, the correlation Ψ ⋆ f can be computed by translating f before applying Ψ:

[Ψ ⋆ f](x) = Ψ [π_0(x)⁻¹ f],    (4)

where x is interpreted as a translation when given as input to π_0.

We can now calculate the transformation law of the output space. To do so, we apply a translation t and a transformation r ∈ H to f, yielding π_0(tr) f, and then perform the correlation with Ψ. With some algebra (Appendix A), we find:

[Ψ ⋆ π_0(tr) f](x) = ρ(r) [Ψ ⋆ f](r⁻¹(x − t)).    (5)

Now if we define π_1(tr) as

[π_1(tr) f](x) = ρ(r) f(r⁻¹(x − t)),    (6)

then Ψ ⋆ π_0(tr) f = π_1(tr) [Ψ ⋆ f] (see Fig. 4). This representation π_1 is known as the representation of G induced by the representation ρ of H, and is denoted π_1 = Ind_H^G ρ.

Figure 4: The representation π_1 induced from the permutation representation ρ shown in fig. 3. A single fiber is highlighted: it is transported to a new location and acted on by ρ(r).

When parsing eq. 6, it is important to keep in mind that (as indicated by the square brackets) π_1(tr) acts on the whole feature space, while ρ(r) acts on individual fibers.

If we compare the induced representation (eq. 6) to the representation π_0 defined in eq. 1, we see that the difference lies only in the presence of a factor ρ(r) applied to the fibers. This factor describes how the feature channels are mixed by the transformation. The color channels in the input space do not get mixed by geometrical transformations, so we say that π_0 is induced from the trivial representation of H.

Now that we have a G-steerable feature space F_1, we can iterate the procedure by computing a basis for the space of intertwiners between π_1 (restricted to H) and some new fiber representation ρ_1 of our choosing.
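As an illustration of eq. 6, the sketch below applies the induced representation to a stack of feature maps: the spatial part moves the fibers, and ρ(r) mixes the channels within each fiber. The periodic boundary handling, the rotation conventions, and the function name induced_action are simplifying assumptions, not the authors' implementation.

```python
import numpy as np

def induced_action(rho_r, k_rot, t, f):
    """Apply pi_1(t r) to a stack of feature maps f of shape (K, H, W), cf. eq. 6.

    rho_r is the (K, K) fiber representation of the rotation r; the spatial part rotates
    by k_rot * 90 degrees and translates by t (periodic boundary, simplified conventions).
    """
    g = np.rot90(f, k=k_rot, axes=(1, 2))          # spatial part: rotate the grid of fibers
    g = np.roll(g, shift=(t[1], t[0]), axis=(1, 2))   # ... then translate it by t
    return np.einsum("ij,jhw->ihw", rho_r, g)      # fiber part: mix the channels with rho(r)

# Example: a fiber of 4 channels that are cyclically shifted by a 90-degree rotation,
# matching the permutation representation of Figures 3 and 4.
rho_r = np.roll(np.eye(4), 1, axis=0)
f = np.random.randn(4, 8, 8)

g = f.copy()
for _ in range(4):                                 # r has order 4, so pi_1(r)^4 = identity
    g = induced_action(rho_r, 1, (0, 0), g)
assert np.allclose(g, f)
```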

2.5 Feature types and character theory

By now, the reader may be wondering how to choose ρ, or indeed what the space of representations that we can choose from looks like in the first place. We will answer these questions in this section by showing that each representation has a type (encoded as a short list of integers) that corresponds to a certain symmetry or invariance of the feature. We further show how the number of parameters of an equivariant filter bank depends on the types of the representations π_0 and ρ that it intertwines. Our discussion will make use of a number of important elementary results from group representation theory which are stated but not proved. The reader wishing to go deeper may consult chapters 1 and 2 of the excellent book by Serre (1977).

Recall that a group representation is a set of invertible linear maps ρ(g) satisfying ρ(gh) = ρ(g) ρ(h) for all elements g, h of the group. It can be shown that any representation of a finite group H is a direct sum (i.e. a block-diagonal matrix, up to a change of basis) of a number of "elementary" representations associated with H. These building blocks are called irreducible representations (or irreps), because they can themselves not be block-diagonalized further. In other words, if ρ_1, …, ρ_J are the irreducible representations of H, any representation ρ of H can be written in block-diagonal form:

ρ(g) = A [⊕_i ρ_{j_i}(g)] A⁻¹,    (7)

for some basis matrix A and some indices j_i that index the irreps (each irrep may occur zero or more times).

Each irreducible representation corresponds to a type of symmetry, as shown in table 1. For example, as can be seen in this table, the representations B1 and B2 represent the 90-degree rotation as the 1 × 1 matrix (−1), so the basis filters for these representations change sign when rotated by 90 degrees. It should be noted that in the higher layers, elementary basis filters can look different because they depend on the representation that is being decomposed.

Irrep: A1, A2, B1, B2 (one-dimensional), E (two-dimensional). [Basis filter images and representation matrices not reproduced here.]
Table 1: The irreducible representations of the roto-reflection group D4. This group is generated by 90-degree rotations r and mirror reflections m, and has 5 irreps labelled A1, A2, B1, B2, E. Left: decomposition of π_0 (eq. 1) in the space of 3 × 3 filters with one channel. This representation turns out to have type (3, 0, 1, 1, 2), meaning there are three copies of A1, one copy of B1, one copy of B2, and two copies of the 2D irrep E (A2 does not appear). Right: the representation matrices of each irrep, for each element of the group D4. The reader may verify that these are valid representations, and that the characters (traces) are orthogonal.

The fact that all representations can be decomposed into a direct sum of irreducibles implies that each representation has a basis-independent type: which irreducible representations appear in it, and with what multiplicity. For example, the input representation π_0 on a 3 × 3 single-channel patch (table 1) has type (3, 0, 1, 1, 2). This means that, for instance, π_0(r) is block-diagonalized as:

π_0(r) = A [ρ_{A1}(r) ⊕ ρ_{A1}(r) ⊕ ρ_{A1}(r) ⊕ ρ_{B1}(r) ⊕ ρ_{B2}(r) ⊕ ρ_E(r) ⊕ ρ_E(r)] A⁻¹,    (8)

where the block-diagonal matrix contains copies of the irreps evaluated at the 90-degree rotation r (see the corresponding column in table 1). The change of basis matrix A is constructed from the basis filters shown in table 1 (and the same A block-diagonalizes π_0(g) for all g ∈ H).

So the most general way in which we can choose a fiber representation ρ is to choose multiplicities m_i ≥ 0 and a basis matrix A. In Section 2.6 we will find that there is an important restriction on this freedom, which alleviates the need to choose a basis. The choice of multiplicities is then the only hyperparameter, analogous to the choice of the number of channels in an ordinary CNN. Indeed, the multiplicities determine the number of channels: K = Σ_i m_i dim(ρ_i).

By choosing the type of ρ, we also determine the type of the induced representation π_1 = Ind_H^G ρ (restricted to H), but what is it? Explicit formulas exist (Reeder (2014); Serre (1977)) but are rather complicated, so we will present a simple computational procedure that can be used to determine the type of any representation. This procedure relies on the character χ_ρ(g) = Tr(ρ(g)) of the representation to be decomposed. The most important fact about characters is that the characters of irreps are orthogonal:

⟨χ_i, χ_j⟩ = (1 / |H|) Σ_{h ∈ H} χ_i(h) χ_j(h) = δ_{ij}.    (9)

Furthermore, since the trace of a direct sum equals the sum of the traces (i.e. Tr(A ⊕ B) = Tr(A) + Tr(B)), and every representation is a direct sum of irreps, it follows that we can obtain the multiplicity m_i of irrep ρ_i in ρ by computing the inner product with the i-th character:

m_i = ⟨χ_ρ, χ_i⟩.    (10)

So a simple dot product of characters is all we need to determine the type of a representation.
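A small sketch of this procedure for H = D4, using standard matrix realizations of the five irreps (an assumption of this illustration): it checks the orthogonality relation (eq. 9) and recovers the type of π_0 on 3 × 3 single-channel filters via eq. 10.

```python
import numpy as np

def irrep(name, k, f):
    """Matrix of the D4 irrep `name` at the element r^k m^f (r = 90-degree rotation, m = mirror)."""
    if name == "A1": return np.array([[1.0]])
    if name == "A2": return np.array([[(-1.0) ** f]])
    if name == "B1": return np.array([[(-1.0) ** k]])
    if name == "B2": return np.array([[(-1.0) ** (k + f)]])
    c, s = np.cos(np.pi * k / 2), np.sin(np.pi * k / 2)       # "E": the 2D standard rep
    return np.array([[c, -s], [s, c]]) @ (np.diag([1.0, -1.0]) if f else np.eye(2))

elements = [(k, f) for k in range(4) for f in range(2)]
irreps = ["A1", "A2", "B1", "B2", "E"]
chars = {n: np.array([np.trace(irrep(n, k, f)) for (k, f) in elements]) for n in irreps}

# Orthogonality of irreducible characters (eq. 9): <chi_i, chi_j> = delta_ij.
gram = np.array([[chars[a] @ chars[b] / len(elements) for b in irreps] for a in irreps])
assert np.allclose(gram, np.eye(len(irreps)))

# Character of pi_0 acting on 3x3 single-channel filters. This is a permutation
# representation, so chi(g) is simply the number of pixels fixed by g.
def fixed_pixels(k, f, size=3):
    c0, s0 = int(round(np.cos(np.pi * k / 2))), int(round(np.sin(np.pi * k / 2)))
    g = np.array([[c0, -s0], [s0, c0]]) @ (np.diag([1, -1]) if f else np.eye(2, dtype=int))
    c = (size - 1) // 2
    pts = [np.array([j - c, c - i]) for i in range(size) for j in range(size)]
    return sum(np.array_equal(g @ x, x) for x in pts)

chi_pi0 = np.array([fixed_pixels(k, f) for (k, f) in elements])

# Multiplicity of each irrep in pi_0 (eq. 10): m_i = <chi_pi0, chi_i>.
mult = {n: int(round(chi_pi0 @ chars[n] / len(elements))) for n in irreps}
print(mult)   # {'A1': 3, 'A2': 0, 'B1': 1, 'B2': 1, 'E': 2}
```

The printed multiplicities agree with the type (3, 0, 1, 1, 2) quoted in the caption of Table 1.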

2.5.1 The parameter cost of equivariant convolution layers

Steerable CNNs use parameters much more efficiently than ordinary CNNs. In this section we show how the number of parameters required by an equivariant layer is determined by the feature types of the input and output space, and how the efficiency of a choice of feature types may be evaluated.

In section 2.3, we found that a filter bank Ψ is equivariant if and only if it lies in the vector space Hom_H(π_0, ρ). It follows that the number of parameters for such a filter bank is equal to the dimensionality of this space, dim Hom_H(π_0, ρ). This number is known as the intertwining number of π_0 and ρ and plays an important role in the theory of group representations.

As with multiplicities, the intertwining number is easily computed using characters. It can be shown (Reeder, 2014) that the intertwining number equals:

dim Hom_H(π_0, ρ) = (1 / |H|) Σ_{h ∈ H} χ_{π_0}(h) χ_ρ(h).    (11)

By linearity and the orthogonality of characters, we find that dim Hom_H(π_0, ρ) = Σ_i m_i n_i for representations π_0 and ρ of type (m_1, …, m_J) and (n_1, …, n_J), respectively. Thus, as far as the number of parameters of a steerable convolution layer is concerned, the only choice we have to make for ρ is its type – a short list of integers n_i.

The efficiency of a choice of type can be assessed using a quantity we call the parameter utilization:

μ = s² K K' / dim Hom_H(π_0, ρ).    (12)

The numerator equals s² K K': the number of parameters of a non-equivariant filter bank. The denominator equals the parameter cost of an equivariant filter bank with the same filter size and number of input/output channels. Typical values of μ in effective architectures are around 8 for our running example H = D4. Such a layer utilizes its parameters 8 times more intensively than an ordinary convolution layer.
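A minimal sketch of this bookkeeping, assuming the type (3, 0, 1, 1, 2) of the single-channel 3 × 3 patch representation from Table 1 and, as output fiber, a single regular D4 capsule of type (1, 1, 1, 1, 2); the variable names are illustrative.

```python
# Parameter cost and utilization of an equivariant filter bank, computed from types alone.
irreps = ["A1", "A2", "B1", "B2", "E"]
dims = {"A1": 1, "A2": 1, "B1": 1, "B2": 1, "E": 2}

patch_type = {"A1": 3, "A2": 0, "B1": 1, "B2": 1, "E": 2}   # pi_0 on a 3x3 patch, K = 1
out_type   = {"A1": 1, "A2": 1, "B1": 1, "B2": 1, "E": 2}   # regular representation of D4

K_out = sum(out_type[i] * dims[i] for i in irreps)            # 8 output channels
n_params = sum(patch_type[i] * out_type[i] for i in irreps)   # dim Hom_H(pi_0, rho) = 9
mu = (3 * 3 * 1 * K_out) / n_params                           # eq. 12: 72 / 9 = 8.0
print(K_out, n_params, mu)
```

In this example the equivariant filter bank has 9 parameters where an unconstrained one of the same shape would have 72, i.e. μ = 8.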

2.6 Equivariant nonlinearities & capsules

In the previous section we showed that only the basis-independent types of π_0 and ρ play a role in determining the parameter cost of an equivariant filter bank. An equivalent representation ρ' = B ρ B⁻¹ will have the same type, and hence the same parameter cost, as ρ. However, when it comes to nonlinearities, different bases behave differently.

Just like a convolution layer (eq. 3), a layer of nonlinearities must commute with the group action. An elementwise nonlinearity ν (or more generally, a fiber-wise nonlinearity) is admissible for an input representation ρ if there exists an output representation ρ' such that, for all h ∈ H, applying ν after ρ(h) equals applying ρ'(h) after ν.

Since commutation with nonlinearities depends on the basis, we need a more granular notion than the feature type. We define a ρ-capsule as a (typically low-dimensional) feature vector that transforms according to a representation ρ (we may also refer to ρ itself as the capsule). Thus, while a capsule has a type, not all representations of that type are equivalent as capsules. Given a catalogue of capsules ρ_i (for i = 1, …, n) with multiplicities m_i, we can construct a fiber as a stack of capsules that is steerable by a block-diagonal representation with m_i copies of ρ_i on the diagonal.

Like the capsules of Hinton et al. (2011), our capsules encode the pose of a pattern in the input, and consist of a number of units (dimensions) that do not get mixed with the units of other capsules under transformations. In this sense, a stack of capsules is disentangled (Cohen & Welling, 2014).

We have found a few simple types of capsules and corresponding admissible nonlinearities. It is easy to see that any nonlinearity is admissible for ρ when the latter is realized by permutation matrices: permuting a list of coordinates and then applying an elementwise nonlinearity is the same as applying the nonlinearity and then permuting. If ρ is realized by signed permutation matrices, then CReLU(x) = (ReLU(x), ReLU(−x)), introduced by Shang et al. (2016), or any concatenated nonlinearity ν_c(x) = (ν(x), ν(−x)), will be admissible. Any scale-free concatenated nonlinearity such as CReLU is admissible for a representation realized by monomial matrices (having the same nonzero pattern as a permutation matrix). Finally, we can always make a representation of a finite group orthogonal by a suitable choice of basis, which means that we can use any nonlinearity that acts only on the length of the vector.

For many groups, the irreps can be realized using signed permutation matrices, so we can use irreducible -capsules with concatenated nonlinearities such as CReLU. Another class of capsules, which we call quotient capsules, are naturally realized by permutation matrices, and are thus compatible with any nonlinearity. These are described in Appendix C.
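The signed-permutation case can be checked directly: if ρ(g) is a signed permutation matrix, then CReLU(ρ(g) x) equals ρ'(g) CReLU(x) for a plain permutation matrix ρ'(g) built from the positive and negative parts of ρ(g). The sketch below, with illustrative helper names, verifies this numerically.

```python
import numpy as np

def crelu(x):
    """Concatenated ReLU: CReLU(x) = [relu(x), relu(-x)] (Shang et al., 2016)."""
    return np.concatenate([np.maximum(x, 0), np.maximum(-x, 0)])

def output_rep(rho):
    """For a signed permutation matrix rho, a permutation rho' acting on the CReLU outputs."""
    pos, neg = np.maximum(rho, 0), np.maximum(-rho, 0)
    return np.block([[pos, neg], [neg, pos]])

rng = np.random.default_rng(0)
K = 5
perm = rng.permutation(np.eye(K))                  # a random permutation matrix
signs = np.diag(rng.choice([-1.0, 1.0], size=K))   # random signs on each row
rho = signs @ perm                                 # a signed permutation matrix

x = rng.standard_normal(K)
# Admissibility: applying rho then CReLU equals applying CReLU then rho'.
assert np.allclose(crelu(rho @ x), output_rep(rho) @ crelu(x))
```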

2.7 Computational efficiency

Modern convolutional networks often use on the order of hundreds of channels per layer (Zagoruyko & Komodakis, 2016). When using 3 × 3 filters, a filter bank can therefore have on the order of a million dimensions. The number of parameters of an equivariant filter bank is about 8 times smaller, but an explicit basis for the space of equivariant filter banks would contain one basis element of that size per parameter, which is too large to be practical.

Fortunately, the block-diagonal structure of ρ and π_0 induces a block structure in Ψ. Suppose ρ = ⊕_i ρ_i and π_0 = ⊕_j π_j are stacks of capsules. Then an intertwiner Ψ is a matrix of shape K' × s²K, where K = Σ_j dim(π_j) and K' = Σ_i dim(ρ_i). This matrix has the following block structure:

Ψ = [ Ψ_11  …  Ψ_1n
       ⋮         ⋮
      Ψ_m1  …  Ψ_mn ]    (13)

Each block Ψ_ij corresponds to an input-output pair of capsules, and can be parameterized by a linear combination Ψ_ij = Σ_k α_k^{ij} ψ_k^{ij} of basis matrices ψ_k^{ij}.

In practice, we typically use many copies of the same capsule (say m_i copies of ρ_i and n_j copies of π_j). Therefore, many of the blocks can be constructed using the same intertwiner basis. If we order equivalent capsules to be adjacent, the intertwiner consists of "blocks of blocks". Each superblock has shape m_i dim(ρ_i) × n_j s² dim(π_j), and consists of subblocks of shape dim(ρ_i) × s² dim(π_j).

The computation graph for an equivariant convolution layer is constructed as follows. Given a catalogue of capsules and corresponding post-activation capsules, we compute the induced representations and the bases for the spaces of intertwiners in an offline step; the basis for each pair of capsule types is stored as a matrix. Then, given a list of input / output multiplicities for the capsules, a parameter matrix is instantiated for each pair of capsule types. The superblocks are obtained by multiplying the parameters with the corresponding basis matrix and reshaping. Once all superblocks are filled in, the resulting matrix is reshaped into a filter bank of shape K' × K × s × s and convolved with the input.
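The sketch below illustrates only the "blocks of blocks" assembly and the associated shapes; the basis arrays here are random stand-ins rather than actual intertwiners, and all names and shape conventions are assumptions made for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
s = 3
d_in, d_out = [1, 2], [1, 2]          # dimensions of the input / output capsule types
n_in, n_out = [4, 2], [3, 2]          # multiplicities: how many copies of each capsule
# Stand-in intertwiner bases: basis[i][j] has shape (n_basis, d_out[i], s*s*d_in[j]).
# In a real implementation these would be precomputed as in Section 2.3.
basis = [[rng.standard_normal((2, d_out[i], s * s * d_in[j]))
          for j in range(len(d_in))] for i in range(len(d_out))]

def assemble(params):
    """params[i][j]: learnable weights of shape (n_out[i], n_in[j], n_basis)."""
    superblocks = []
    for i in range(len(d_out)):
        row = []
        for j in range(len(d_in)):
            # every (output copy, input copy) sub-block is a linear combination of the same basis
            sub = np.einsum("abk,kuv->aubv", params[i][j], basis[i][j])
            row.append(sub.reshape(n_out[i] * d_out[i], n_in[j] * s * s * d_in[j]))
        superblocks.append(np.concatenate(row, axis=1))
    return np.concatenate(superblocks, axis=0)        # shape (K', s*s*K)

params = [[rng.standard_normal((n_out[i], n_in[j], 2)) for j in range(len(d_in))]
          for i in range(len(d_out))]
Psi = assemble(params)
K = sum(n * d for n, d in zip(n_in, d_in))            # 4*1 + 2*2 = 8 input channels
K_out = sum(n * d for n, d in zip(n_out, d_out))      # 3*1 + 2*2 = 7 output channels
print(Psi.shape, (K_out, s * s * K))                  # both (7, 72)
```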

2.8 Using steerable CNNs in practice

A full understanding of the theory of steerable CNNs requires some knowledge of group representation theory, but using steerable CNN technology is not much harder than using ordinary CNNs. Instead of choosing a number of channels for a given layer, one chooses a list of multiplicities, one for each capsule in a library of capsules provided by the developer. To preserve equivariance, the activation function applied to a capsule must be chosen from a list of admissible nonlinearities for that capsule (which sometimes includes all nonlinearities). Finally, one must respect the type system and only add identical capsules (e.g. in ResNets). These constraints can all be checked automatically, as in the sketch below.
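The sketch shows the kind of automatic checking such a library could perform. The Capsule and FiberSpec abstractions, the capsule catalogue, and the admissibility lists are hypothetical illustrations, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Capsule:
    name: str
    dim: int
    admissible: tuple            # names of nonlinearities admissible for this capsule

CATALOGUE = {
    "regular": Capsule("regular", 8, ("any",)),      # permutation matrices: any nonlinearity
    "B1":      Capsule("B1", 1, ("crelu",)),         # signed permutation: concatenated nonlinearities
    "E":       Capsule("E", 2, ("crelu", "norm")),   # orthogonal 2D capsule
}

FiberSpec = tuple                # e.g. (("regular", 4), ("E", 2)): (capsule name, multiplicity)

def num_channels(spec: FiberSpec) -> int:
    return sum(CATALOGUE[name].dim * mult for name, mult in spec)

def check_activation(spec: FiberSpec, nonlinearity: str):
    for name, _ in spec:
        caps = CATALOGUE[name]
        if nonlinearity not in caps.admissible and "any" not in caps.admissible:
            raise ValueError(f"{nonlinearity} is not admissible for capsule {name}")

def check_residual_add(spec_a: FiberSpec, spec_b: FiberSpec):
    if spec_a != spec_b:         # only identical stacks of capsules may be summed
        raise ValueError("cannot add fibers of different types")

spec = (("regular", 4), ("E", 2))
print(num_channels(spec))        # 4*8 + 2*2 = 36 channels
check_activation((("B1", 3),), "crelu")
check_residual_add(spec, spec)
```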

3 Related Work

Steerable filters were first studied for applications in signal processing and low-level vision (Freeman & Adelson, 1991; Greenspan et al., 1994; Simoncelli & Freeman, 1995). More or less explicit connections between steerability and group representation theory have been observed by Lenz (1989); Koenderink & Van Doorn (1990); Teo (1998); Krajsek & Mester (2007). As we have tried to demonstrate in this paper, representation theory is indeed the natural mathematical framework in which to study steerability.

In machine learning, equivariant kernels were studied by Reisert (2008); Skibbe (2013). In the context of neural networks, various authors have studied equivariant representations. Capsules were introduced in Hinton et al. (2011), and significantly improved by Tieleman (2014). A theoretical account of equivariant representation learning in the brain is given by Anselmi et al. (2014). Group equivariant scattering networks were defined and studied by Mallat (2012) for compact groups, and by Sifre & Mallat (2013); Oyallon & Mallat (2015) for the roto-translation group. Jacobsen et al. (2016) describe a network that uses a fixed set of (possibly steerable) basis filters with learned weights. Lenc & Vedaldi (2015) showed empirically that convolutional networks tend to learn equivariant representations, which suggests that equivariance could be a good inductive bias.

Invariant and equivariant CNNs have been studied by Gens & Domingos (2014); Kanazawa et al. (2014); Dieleman et al. (2015, 2016); Cohen & Welling (2016); Marcos et al. (2016). All of these models, as well as scattering networks, implicitly use the regular representation: feature maps are (often implicitly) conceived of as functions on the group, and the action of the group on this space of functions is known as the regular representation (Serre (1977), Appendix B). This form of equivariance is a special case of that presented in this paper.

The idea of adding a type system to neural networks has been explored by Olah (2015); Balduzzi & Ghifary (2015). We have shown that a type system emerges naturally from the decomposition of a linear representation of a mathematical structure (a group, in our case) associated with the representation learned by a neural network.

4 Experiments

We performed experiments on the CIFAR10 dataset (Krizhevsky, 2009) to determine if steerability is a useful inductive bias, and to determine the relative merits of the various types of capsules. In order to run experiments faster, and to see how steerable CNNs perform in the small-data regime, we used only 2000 training samples for our initial experiments.

As a baseline, we used the competitive wide residual network (ResNet) architecture (He et al., 2015, 2016; Zagoruyko & Komodakis, 2016). We tuned the capacity of this network for the reduced dataset size and settled on an architecture with three residual blocks per stage, two layers each, for three stages with feature maps of size 32 × 32, 16 × 16 and 8 × 8, at various widths. We compared the baseline architecture to various kinds of steerable CNNs, obtained by replacing the convolution layers by steerable convolution layers. To make sure that differences in performance were not simply due to underfitting or overfitting, we tuned the width (number of channels) using a validation set. The rest of the training procedure is identical to Cohen & Welling (2016), and is fixed for all of our experiments.

We first tested steerable CNNs that consist entirely of a single kind of capsule. We found that architectures with only one type do not perform very well compared to plain ResNets trained on 2k samples from CIFAR10, except for those that use the regular representation capsule (Appendix C), which outperforms standard CNNs. This is not too surprising, because many capsules are quite restrictive in the spatial patterns they can express. The strong performance of regular capsules is consistent with the results of Cohen & Welling (2016), and can be explained by the fact that the regular representation contains all other (irreducible and quotient) representations as subrepresentations, and can therefore learn arbitrary spatial patterns.

Net     Depth  Width       #Params  #Labels  Dataset  Test error
Ladder  10     96                   4k       C10ss
steer   14     (280, 112)  4.4M     4k       C10
steer   20     (160, 64)   2.2M     4k       C10
steer   14     (280, 112)  4.4M     4k       C10+
steer   20     (160, 64)   2.2M     4k       C10+
ResNet  1001   16          10.2M    50k      C10+
Wide    28     160         36.5M    50k      C10+
Dense   100    2400        27.2M    50k      C10+
steer   26     (280, 112)  9.1M     50k      C10+
steer   20     (440, 176)  16.7M    50k      C10+
steer   14     (400, 160)  9.1M     50k      C10+
ResNet  1001   16          10.2M    50k      C100+
Wide    28     160         36.5M    50k      C100+
Dense   100    2400        27.2M    50k      C100+
steer   20     (280, 112)  6.9M     50k      C100+
steer   14     (400, 160)  9.1M     50k      C100+
Table 2: Comparison of results of steerable CNNs vs. previous state of the art methods (test error values not reproduced here). A plus (+) indicates modest data augmentation (shifts and flips). Width for steerable CNNs is reported as a pair of numbers, one for the input / output layer of a ResNet block, and one for the intermediate layer.

We then created networks that use a mix of the more successful kinds of capsules. After a few preliminary experiments, we settled on a residual network that uses one mix of capsules for the input and output layer of a residual block, and another for the intermediate layer. The first representation consists of quotient capsules: regular, qm, qmr2, qmr3 (see Appendix C) followed by ReLUs. The second consists of irreducible capsules: A1, A2, B1, B2, E(2x) followed by CReLUs. On CIFAR10 with 2k labels, this architecture works better than both standard ResNets and networks built from regular capsules alone.

When tested on CIFAR10 with 4k labels (table 2), the method comes close to the state of the art among semi-supervised methods that use additional unlabelled data (Rasmus et al., 2016), and outperforms transfer-learning approaches such as DCGAN (Radford et al., 2015). When tested on the full CIFAR10 and CIFAR100 datasets, the steerable CNN substantially outperforms the ResNet baseline (He et al., 2016) and achieves state of the art results, improving over wide and dense residual networks (Zagoruyko & Komodakis, 2016; Huang et al., 2016).

5 Conclusion & Future Work

We have presented a theoretical framework for understanding steerable representations in convolutional networks, and have shown that steerability is a useful inductive bias that can improve model accuracy, particularly when little data is available. Our experiments show that a simple steerable architecture achieves state of the art results on CIFAR10 and CIFAR100, outperforming recent architectures such as wide and dense residual networks.

The mathematical connection between representation learning and representation theory that we have established improves our understanding of the inner workings of (equivariant) convolutional networks, revealing the humble CNN as an elegant geometrical computation engine. We expect that this new tool (representation theory), developed over more than a century by mathematicians and physicists, will greatly benefit future investigations in this area.

For concreteness, we have used the group of flips and rotations by multiples of 90 degrees as a running example throughout this paper. This group already has some nontrivial characteristics (such as non-commutativity), but it is still small and discrete. The theory of steerable CNNs, however, readily extends to the continuous setting. Evaluating steerable CNNs for large, continuous and high-dimensional groups is an important piece of future work.

Another direction for future work is learning the feature types, which may be easier in the continuous setting because (for non-compact groups) the irreps live in a continuous space where optimization may be possible. Beyond classification, steerable CNNs are likely to be useful in geometrical tasks such as action recognition, pose and motion estimation, and continuous control tasks.

Acknowledgments

We kindly thank Kenta Oono, Shuang Wu, Thomas Kipf and the anonymous reviewers for their feedback and suggestions. This research was supported by Facebook, Google and NWO (grant number NAI.14.108).

References

  • Anselmi et al. (2014) F. Anselmi, J. Z. Leibo, L. Rosasco, J. Mutch, A. Tacchetti, and T. Poggio. Unsupervised learning of invariant representations with low sample complexity: the magic of sensory cortex or a new framework for machine learning? Technical Report 001, MIT Center for Brains, Minds and Machines, 2014.
  • Balduzzi & Ghifary (2015) David Balduzzi and Muhammad Ghifary. Strongly-Typed Recurrent Neural Networks. 48, 2015.
  • Cohen (2013) T. Cohen. Learning Transformation Groups and their Invariants. PhD thesis, University of Amsterdam, 2013.
  • Cohen & Welling (2014) T. Cohen and M. Welling. Learning the Irreducible Representations of Commutative Lie Groups. In Proceedings of the 31st International Conference on Machine Learning (ICML), volume 31, pp. 1755–1763, 2014.
  • Cohen & Welling (2016) Taco S. Cohen and Max Welling. Group equivariant convolutional networks. In Proceedings of The 33rd International Conference on Machine Learning (ICML), volume 48, pp. 2990–2999, 2016.
  • Dieleman et al. (2015) S. Dieleman, K. W. Willett, and J. Dambre. Rotation-invariant convolutional neural networks for galaxy morphology prediction. Monthly Notices of the Royal Astronomical Society, 450(2), 2015.
  • Dieleman et al. (2016) S. Dieleman, J. De Fauw, and K. Kavukcuoglu. Exploiting Cyclic Symmetry in Convolutional Neural Networks. In International Conference on Machine Learning (ICML), 2016.
  • Folland (1995) G. B. Folland. A Course in Abstract Harmonic Analysis. CRC Press, 1995.
  • Freeman & Adelson (1991) W T Freeman and E H Adelson. The design and use of steerable filters. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 13(9):891–906, sep 1991. ISSN 0162-8828. doi: 10.1109/34.93808. URL http://dx.doi.org/10.1109/34.93808.
  • Gens & Domingos (2014) R. Gens and P. Domingos. Deep Symmetry Networks. In Advances in Neural Information Processing Systems (NIPS), 2014.
  • Greenspan et al. (1994) H Greenspan, S Belongie, R Goodman, and P Perona. Overcomplete Steerable Pyramid Filters and Rotation Invariance. Proceedings of the Computer Vision and Pattern Recognition (CVPR), 1994.
  • He et al. (2015) K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. arXiv:1512.03385, 2015.
  • He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity Mappings in Deep Residual Networks. arXiv:1603.05027, 2016.
  • Hinton et al. (2011) G. E. Hinton, A. Krizhevsky, and S. D. Wang. Transforming auto-encoders. ICANN-11: International Conference on Artificial Neural Networks, Helsinki, 2011.
  • Huang et al. (2016) Gao Huang, Zhuang Liu, and Kilian Q. Weinberger. Densely Connected Convolutional Networks. pp. 1–12, 2016. URL http://arxiv.org/abs/1608.06993.
  • Jacobsen et al. (2016) Jorn-Henrik Jacobsen, Jan van Gemert, Zhongyou Lou, and Arnold W.M. Smeulders. Structured Receptive Fields in CNNs. 2016.
  • Kanatani (1990) Kenichi Kanatani. Group-Theoretical Methods in Image Understanding. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 1990. ISBN 9783642852152.
  • Kanazawa et al. (2014) A Kanazawa, A Sharma, and D Jacobs. Locally Scale-invariant Convolutional Neural Network. Deep Learning and Representation Learning Workshop: NIPS, pp. 1–11, 2014.
  • Kaniuth & Taylor (2013) Eberhard Kaniuth and Keith F. Taylor. Induced Representations of Locally Compact Groups. 2013. ISBN 9780521762267.
  • Koenderink & Van Doorn (1990) J. J. Koenderink and a. J. Van Doorn. Receptive field families. Biological Cybernetics, 63(4):291–297, 1990. ISSN 03401200. doi: 10.1007/BF00203452.
  • Krajsek & Mester (2007) Kai Krajsek and Rudolf Mester. A Unified Theory for Steerable and Quadrature Filters. Communications in Computer and Information Science, 4 CCIS:201–214, 2007. ISSN 18650929. doi: 10.1007/978-3-540-75274-5˙13.
  • Krizhevsky (2009) Alex Krizhevsky. Learning Multiple Layers of Features from Tiny Images. Technical report, 2009.
  • Lenc & Vedaldi (2015) K. Lenc and A. Vedaldi. Understanding image representations by measuring their equivariance and equivalence. In Proceedings of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2015.
  • Lenz (1989) Reiner Lenz. Group-theoretical model of feature extraction. 6(6):827–834, 1989.
  • Mackey (1952) George W Mackey. Induced Representations of Locally Compact Groups I. 55(1):101–139, 1952.
  • Mackey (1953) George W Mackey. Induced Representations of Locally Compact Groups II. The Frobenius Reciprocity Theorem. 58(2):193–221, 1953.
  • Mackey (1968) George W. Mackey. Induced Representations of Groups and Quantum Mechanics. W.A. Benjamin Inc., New York-Amsterdam, 1968.
  • Mallat (2012) Stephane Mallat. Group Invariant Scattering. Communications in Pure and Applied Mathematics, 65(10):1331–1398, 2012.
  • Marcos et al. (2016) Diego Marcos, Michele Volpi, and Devis Tuia. Learning rotation invariant convolutional filters for texture classification. pp.  6, 2016. URL http://arxiv.org/abs/1604.06720.
  • Olah (2015) Chris Olah. Neural Networks, Types, and Functional Programming, 2015. URL https://colah.github.io/posts/2015-09-NN-Types-FP/.
  • Oyallon & Mallat (2015) E. Oyallon and S. Mallat. Deep Roto-Translation Scattering for Object Classification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2865–2873, 2015.
  • Radford et al. (2015) Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv, pp. 1–15, 2015. ISSN 0004-6361. doi: 10.1051/0004-6361/201527329. URL http://arxiv.org/abs/1511.06434.
  • Rasmus et al. (2016) Antti Rasmus, Harri Valpola, Mikko Honkala, Mathias Berglund, and Tapani Raiko. Semi-supervised learning with Ladder Networks. pp. 1–9, 2016. ISSN 10495258.
  • Reeder (2014) Mark Reeder. Notes on representations of finite groups, 2014. URL https://www2.bc.edu/{~}reederma/RepThy.pdf.
  • Reisert (2008) Marco Reisert. Group Integration Techniques in Pattern Analysis: A Kernel View. PhD thesis, Albert-Ludwigs-University, 2008.
  • Serre (1977) Jean-Pierre Serre. Linear Representations of Finite Groups, 1977.
  • Shang et al. (2016) Wenling Shang, Kihyuk Sohn, Diogo Almeida, and Honglak Lee. Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units. In International Conference on Machine Learning (ICML), volume 48, 2016.
  • Sifre & Mallat (2013) Laurent Sifre and Stephane Mallat. Rotation, Scaling and Deformation Invariant Scattering for Texture Discrimination. IEEE conference on Computer Vision and Pattern Recognition (CVPR), 2013.
  • Simoncelli & Freeman (1995) E.P. Simoncelli and W.T. Freeman. The steerable pyramid: a flexible architecture for multi-scale derivative computation. Proceedings of the International Conference on Image Processing, 3:444–447, 1995. ISSN 0818673109. doi: 10.1109/ICIP.1995.537667.
  • Skibbe (2013) H. Skibbe. Spherical Tensor Algebra for Biomedical Image Analysis. PhD thesis, Albert-Ludwigs-Universitat Freiburg im Breisgau, 2013.
  • Taylor (1986) Michael E Taylor. Noncommutative Harmonic Analysis. 1986. ISBN 0821815237.
  • Teo (1998) Patrick Cheng-San Teo. Theory and Applications of Steerable Functions. PhD thesis, Stanford University, 1998.
  • Tieleman (2014) Tijmen Tieleman. Optimizing Neural Networks that Generate Images. PhD thesis, 2014.
  • Weyl (1939) Hermann Weyl. The classical groups: their invariants and representations. Princeton University Press, 1939.
  • Zagoruyko & Komodakis (2016) S. Zagoruyko and N. Komodakis. Wide Residual Networks. arXiv:1605.07146, 2016.

Appendix A: Induction

In this section we will show that a stack of feature maps produced by convolution with an H-equivariant filter bank transforms according to the induced representation. That is, we will derive eq. 5, repeated here for convenience:

[Ψ ⋆ π_0(tr) f](x) = ρ(r) [Ψ ⋆ f](r⁻¹(x − t)).    (14)

In the main text, we mentioned that x can be interpreted as a point or as a translation. Here we make this difference explicit, by writing x for a point and x̄ for the corresponding translation. (The operation x ↦ x̄ defines a section of the projection map that forgets the non-translational part of the transformation (Kaniuth & Taylor, 2013).)

With this notation, the convolution is defined as:

[Ψ ⋆ f](x) = Ψ [π_0(x̄)⁻¹ f].    (15)

Although the induced representation can be described in a more general setting, we will use an explicit matrix representation of G to make it easier to check our computations. A general element of G is written as:

g = [ R  t ]
    [ 0  1 ],    (16)

where R is the matrix representation of an element of H (e.g. a rotation / reflection matrix), and t is a translation vector. The section we use is:

x̄ = [ I  x ]
    [ 0  1 ].    (17)

Finally, we will distinguish the action of G on itself, written gh for g, h ∈ G (implemented as matrix-matrix multiplication), and its action on Z², written gx for g ∈ G and x ∈ Z² (implemented as matrix-vector multiplication after adding a homogeneous coordinate to x).

To keep notation uncluttered, we will write t̄r for the composition of the translation t̄ and a transformation r ∈ H. In full detail, the derivation of the transformation law for the feature space induced by ρ proceeds as follows:

[Ψ ⋆ π_0(t̄r) f](x)
  = Ψ [π_0(x̄)⁻¹ π_0(t̄r) f]                 (step 1)
  = Ψ [π_0(x̄⁻¹ t̄r) f]                       (step 2)
  = Ψ [π_0(r r⁻¹ x̄⁻¹ t̄r) f]                 (step 3)
  = Ψ π_0(r) [π_0(r⁻¹ x̄⁻¹ t̄r) f]            (step 4)
  = ρ(r) Ψ [π_0(r⁻¹ x̄⁻¹ t̄r) f]              (step 5)
  = ρ(r) Ψ [π_0((r⁻¹ x̄⁻¹ t̄r)⁻¹)⁻¹ f]        (step 6)
  = ρ(r) Ψ [π_0(s̄)⁻¹ f],  with s = r⁻¹(x − t)  (step 7)
  = ρ(r) [Ψ ⋆ f](r⁻¹(x − t)).                  (step 8)    (18)

The last line is the result shown in the paper. The justification of each step is:

  1. Definition of ⋆ (eq. 15).

  2. π_0 is a homomorphism / group representation.

  3. rr⁻¹ is the identity, so we can always multiply by it.

  4. π_0 is a homomorphism / group representation.

  5. Ψ is equivariant to H (eq. 3).

  6. Invert twice.

  7. The identity r⁻¹ x̄⁻¹ t̄r = s̄⁻¹, with s = r⁻¹(x − t), can be checked by multiplying the matrices / vectors.

  8. Definition of ⋆ (eq. 15).

The derivation above is somewhat involved and messy, so the reader may prefer to think geometrically (using the figures in the paper) instead of algebraically. This complexity is an artifact of the lack of abstraction in our presentation. The induced representation is really a very natural object to consider (abstractly, it is the adjoint functor to the restriction functor). A more abstract treatment of the induced representation can be found in Serre (1977); Mackey (1952); Reeder (2014). A treatment that is close to our own, but more general, is the "alternate description" found on page 49 of Kaniuth & Taylor (2013).
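Step 7 is the only step that relies on the explicit matrix forms of eqs. 16–17; the sketch below checks it numerically for all r ∈ D4 and random x and t (helper names are illustrative).

```python
import numpy as np

def hom(R, t):
    """Homogeneous 3x3 matrix [[R, t], [0, 1]] for R in H and a translation t (eq. 16)."""
    g = np.eye(3)
    g[:2, :2] = R
    g[:2, 2] = t
    return g

def bar(x):
    """The section x -> xbar: the pure translation associated with the point x (eq. 17)."""
    return hom(np.eye(2), x)

r90 = np.array([[0.0, -1.0], [1.0, 0.0]])
mir = np.array([[-1.0, 0.0], [0.0, 1.0]])
D4 = [np.linalg.matrix_power(r90, k) @ (mir if f else np.eye(2)) for k in range(4) for f in (0, 1)]

rng = np.random.default_rng(0)
x, t = rng.standard_normal(2), rng.standard_normal(2)
for R in D4:
    r = hom(R, np.zeros(2))                               # element of H embedded in G
    lhs = np.linalg.inv(r) @ np.linalg.inv(bar(x)) @ (bar(t) @ r)
    rhs = np.linalg.inv(bar(np.linalg.inv(R) @ (x - t)))  # = sbar^{-1}, s = r^{-1}(x - t)
    assert np.allclose(lhs, rhs)
print("step-7 identity holds for all r in D4")
```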

Appendix B: Relation to Group Equivariant CNNs

In this section we show that the recently introduced Group Equivariant Convolutional Networks (G-CNNs, Cohen & Welling (2016)) are a special kind of steerable CNN. Specifically, a G-CNN is a steerable CNN with regular capsules.

In a G-CNN, the feature maps (except those of the input) are thought of as functions on the group G instead of functions on the plane Z², as we do here. It is shown that the feature maps transform according to

[π(g) f](h) = f(g⁻¹ h),  for g, h ∈ G.    (19)

This defines a linear representation of G known as the regular representation. It is easy to see that the regular representation is naturally realized by permutation matrices. Furthermore, it is known that the regular representation of G is induced by the regular representation of H. The latter is defined in Appendix C, and is what we refer to as "regular capsules" in the paper.

Appendix C: Regular and Quotient Features

Let H be a finite group. A subgroup K of H is a subset that is itself a group (i.e. closed under composition and inverses). The (left) cosets of a subgroup K in H are the sets gK = {gk : k ∈ K}. The cosets are disjoint and jointly cover the whole group (i.e. they partition H). The set of all cosets of K in H is denoted H/K, and is also called the quotient of H by K.

The coset space H/K carries a natural left action by H: for g, u ∈ H, we have g · (uK) = (gu)K.

This action translates into an action on the space of functions on H/K. Let F denote the space of functions f : H/K → R. Then we have the following representation of H on F:

[ρ(g) f](uK) = f(g⁻¹ uK).    (20)

The function f attaches a value to every coset. The H-action permutes these values, because it permutes the cosets. Hence, ρ can be realized by permutation matrices. For small groups the explicit computations can easily be done by hand, while for large groups this task can be automated.

In this way, we get one permutation representation for each subgroup K of H. In particular, for the trivial subgroup K = {e} (containing only the identity e), we have H/K ≅ H. The representation in the space of functions on H is known as the "regular representation". Using such regular representations in a steerable CNN is equivalent to using the group convolutions introduced in Cohen & Welling (2016), so steerable CNNs are a strict generalization of G-CNNs. At the other extreme, we can take K = H, which gives the quotient H/H consisting of a single coset, and hence the trivial representation A1.
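A small sketch of this construction for H = D4 (elements encoded as pairs (k, f) meaning r^k m^f, an encoding chosen for this illustration): it enumerates the cosets of a subgroup, builds the permutation matrices of the quotient representation, and checks that the construction is a homomorphism.

```python
import numpy as np

# D4 elements encoded as pairs (k, f) meaning r^k m^f, with the relation m r = r^{-1} m.
def mul(g, h):
    (a, b), (c, d) = g, h
    return ((a + (-1) ** b * c) % 4, (b + d) % 2)

H = [(k, f) for k in range(4) for f in range(2)]

def cosets(K):
    """The left cosets gK of the subgroup K in H, each represented as a frozenset."""
    return sorted({frozenset(mul(g, k) for k in K) for g in H}, key=sorted)

def quotient_rep(K):
    """Permutation matrices realizing the representation of H on functions on H/K (eq. 20)."""
    cs = cosets(K)
    index = {c: i for i, c in enumerate(cs)}
    rep = {}
    for g in H:
        P = np.zeros((len(cs), len(cs)))
        for c in cs:
            gc = frozenset(mul(g, x) for x in c)   # g acts on cosets: g.(uK) = (gu)K
            P[index[gc], index[c]] = 1.0
        rep[g] = P
    return rep

# Example: the 4-dimensional "qm" feature, built from the subgroup generated by the mirror m.
rep = quotient_rep([(0, 0), (0, 1)])
print(rep[(1, 0)])                                 # 4x4 permutation matrix for the rotation r
# The construction is a group representation: rho(gh) = rho(g) rho(h) for all g, h in H.
assert all(np.allclose(rep[mul(g, h)], rep[g] @ rep[h]) for g in H for h in H)
```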

For the roto-reflection group D4, we have the following subgroups and associated quotient features:

Subgroup    Quotient feature name  Dimensionality
{e}         regular                8
⟨m⟩         qm                     4
⟨mr⟩        qmr                    4
⟨mr²⟩       qmr2                   4
⟨mr³⟩       qmr3                   4
⟨r²⟩        r2                     4
⟨r⟩         r                      2
⟨r², m⟩     r2m                    2
⟨r², mr⟩    r2mr                   2
D4          A1                     1