pose matrices are estimated to capture the spatial relations between the detected parts and the whole. Unlike CNNs, the performance of CapsNet on real and more complex data has not yet been verified, partially due to the high computational cost that prevents it from being widely applicable.
In fact, exploring such invariant representations for object recognition has a long history in both the neuroscience and computer vision literature. For instance, Isik et al. observed that object recognition in the human visual system develops in stages, with invariance to smaller transformations arising before invariance to larger ones, which supports the design of feed-forward hierarchical models of invariant object recognition. In computer vision, part-based representation is one of the most popular invariant object representations; it considers an object as a graph where each node represents an object part and each edge represents the (spatial) relation between parts. Conceptually, part-based representation is view-invariant in 3D and affine-invariant (invariant to translation, scale, and rotation) in 2D. Although inference with part-based models on general graphs can be very expensive, for tree structures such as star graphs the complexity can be linear in the number of parts. Girshick et al. have shown that such star-graph part-based models can be interpreted as CNNs.
In this paper we aim to study the following problem: Can we design and train CNNs to learn affine-invariant representations efficiently, effectively, and robustly?
Motivation. Besides CapsNet, we are also partially motivated by works such as [2, 50] that utilize a priori knowledge as guidance to design and train neural networks efficiently and effectively. For instance, Andreas et al. proposed the notion of a "neural module" that carries out certain semantic functionality using deep learning for visual question answering. Such modules are reusable and can be composed into complex networks for specific tasks. The semantics and the network design here come from the compositional linguistic structure of questions. Thanks to these modules, the network design is much easier to understand: one can check whether the outputs of a module follow what we expect.
Zhang and Brand proposed encoding the network weights as well as the architecture into a Tikhonov regularizer by lifting the ReLU activations, and accordingly developed a block coordinate descent algorithm for fast training of deep models.
In contrast to a posteriori knowledge, such as the visualization of learned filters, a priori knowledge based approaches are more likely to be model-driven, so that one can derive them by reasoning alone rather than from data, when building automatic systems such as neural networks. In this way, networks built with a priori knowledge are expected to be much easier for humans to understand, and their performance is more predictable and robust.
Contributions. Thanks to convolution, CNNs are translation-equivariant, a capability that has contributed significantly to their widespread success. They are, however, neither efficient nor effective at capturing scaled or rotated objects, and thus enhancing CNNs with the capability of learning scale-invariant and rotation-invariant features is challenging but appealing.
In this paper we design a novel deep multi-scale maxout CNN to learn scale-invariant representations. We then propose training this network end-to-end with a novel rotation-invariant regularizer. To the best of our knowledge, we are the first to propose such regularization for handling rotation in deep learning. Note that we take the multi-scale maxout block and the regularizer as a priori knowledge for learning affine-invariant representations. Empirically we demonstrate the benefit of integrating such knowledge into network design and training, leading to better generalization, data-efficiency, and robustness than the state-of-the-art in learning affine-invariant representations.
2 Related Work
Scale-Invariant Networks. One simple way to handle scale is to use an image pyramid in deep learning. Some works [44, 24, 38] are particularly interested in extracting scale-invariant features from the networks. More broadly, multi-scale convolutional filters (or multi-kernels) are employed in networks [30, 39, 3, 28]. The inception module in GoogLeNet is able to capture multi-scale information with maxout units, and a similar idea has been explored in TI-Pooling. ResNet manages to capture multi-scale information using skip connections. Multi-scale DenseNet proposes a two-dimensional multi-scale convolutional network architecture that maintains coarse-level and fine-level features throughout the network. Note that as the number of hidden layers increases, all CNNs tend to extract deep features at multiple scales to a certain degree.
Rotation-Invariant Networks. Recently, quite a few works have focused on learning rotation-invariant features using deep networks. Cohen and Welling proposed Group equivariant CNNs (GCNN), exploiting larger groups of symmetries, including rotations and reflections, in the convolutional layers. Worrall et al. proposed Harmonic Networks, which replace regular CNN filters with circular harmonics and return a maximal response and orientation for every receptive field patch. Both works argue that rotating the data point is equivalent to rotating the filters, and they thus manage to learn rotation-invariant filters in a continuous space. In contrast, some other works such as [51, 49, 32, 18, 48, 33, 40] propose learning the filters in a discretized space by quantizing the rotation angles into a predefined number of steps, so that the final features encode the rotation information. For instance, Rotation Equivariant Vector Field Networks (RotEqNet) apply each convolutional filter at multiple orientations and return a vector field representing the magnitude and angle of the highest-scoring orientation at every spatial location.
Interpretable Networks with A Priori Knowledge. Andreas et al. proposed neural modules that mimic basic semantic functionality using deep neural networks; larger networks are then constructed for specific tasks using knowledge from natural language processing (NLP), such as grammar graphs, as guidance. de Avila Belbute-Peres et al. proposed embedding structured physics knowledge into larger systems as a differentiable physics engine that can be integrated as a module in deep neural networks for end-to-end learning. Amos et al. proposed using Model Predictive Control (MPC) as a differentiable policy class for reinforcement learning in continuous state and action spaces, leveraging and combining the advantages of model-free and model-based approaches. They also showed that their MPC policies are significantly more data-efficient than a generic neural network.
Other Related Networks. Dilated convolution supports exponential expansion of the receptive field without loss of resolution or coverage and thus can help networks capture multi-scale information. Deformable Convolutional Networks (DCN) propose a more flexible convolutional operator that introduces pixel-level deformation, estimated by another network, into 2D convolution. Spatial Transformer Networks (STN) learn affine-invariant representations by the sequential application of a localization network, a parameterized grid generator, and a sampler. Dynamic Filter Networks (DFN) [23, 43] were proposed to learn to generate (local) filters dynamically, conditioned on an input, and can potentially be affine-invariant.
Data Augmentation. This is a well-known technique in deep learning for reducing filter bias during learning by generating more (synthetic) data samples based on predefined rules (or transformations) such as translation, scaling, rotation, and random cropping. Trained with such augmented data, the networks can be expected to be more robust to these transformations. For instance, TI-Pooling assembles all the transformed instances of the same data point in a pool and takes the maximal response for classification. STN learns to predict a transformation matrix for each observation that can be used to augment data.
From the perspective of the feature space, affine-invariant representations of an object under different translations, scalings, and rotations should ideally be mapped to a single point in the feature space, or at least to a compact cluster. To achieve this, several loss functions have been proposed. For instance, the center loss enforces the features from the same class to be close to the corresponding cluster center. Similar ideas have been explored in few-shot learning with neural networks as well. In fact, well-designed networks can generate compactly clustered, well-discriminated features for each class even when trained without such specific losses. Moreover, such losses do not aim to learn affine-invariant features, explicitly or implicitly. Empirically we do not observe any improvement using the center loss over the cross-entropy loss, and thus we do not report its performance.
In contrast to these previous works, we handle scale and rotation jointly in CNNs for learning affine-invariant representations. We introduce a priori knowledge into network design and training as a form of interpretability in deep models, and we demonstrate better generalization, data-efficiency, and robustness than the state-of-the-art networks.
3 Our Approach
Overview. To achieve translation and scale invariance, we propose a multi-scale maxout block, as shown in Fig. 1(a): a set of filters with different predefined sizes is convolved with the image, and the maxout operator then selects the maximum response per pixel among the filters. Mathematically, this block can be formulated as
where denotes the convolution operator, denotes a 2D spatial filter, denotes an image, and denotes the scalar output of the convolution at pixel .
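To make the block concrete, here is a minimal plain-Python sketch, not the paper's implementation (which is in Tensorflow): we assume zero-padded "same" convolution and square filters, convolve the image with filters of different sizes, and keep the per-pixel maximum response.

```python
def conv2d_same(image, kernel):
    """2D convolution with zero padding so the output has the input's size."""
    H, W = len(image), len(image[0])
    k = len(kernel)
    p = k // 2  # padding that centers the kernel on each pixel
    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            s = 0.0
            for i in range(k):
                for j in range(k):
                    yy, xx = y + i - p, x + j - p
                    if 0 <= yy < H and 0 <= xx < W:  # zero outside the image
                        s += kernel[i][j] * image[yy][xx]
            out[y][x] = s
    return out

def multiscale_maxout(image, kernels):
    """Per-pixel maxout over the responses of filters of different sizes."""
    responses = [conv2d_same(image, k) for k in kernels]
    H, W = len(image), len(image[0])
    return [[max(r[y][x] for r in responses) for x in range(W)]
            for y in range(H)]
```

For example, on a constant image of ones, a 1x1 identity filter responds with 1 everywhere while a 3x3 averaging filter responds with 1 only in the interior; maxout keeps the larger response at every pixel.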
In contrast to rotation-invariant networks such as RotEqNet, we place no rotation constraint on the design of network architectures, including the filters. Instead, we impose such a constraint on learning through our rotation-invariant regularizer. Like other regularizers, ours encodes prior knowledge about the filters we would like to learn (denoted as the template in Fig. 1(b)). Inspired by Harmonic Networks, the learned filters should ideally be symmetric along all possible directions, like circles. Due to the discretization of images, however, we propose an alternative representation of such symmetry that can be learned efficiently and effectively.
Learning Problem. In this paper we consider the following optimization problem:
where denotes the training data with image and its class label , denotes the parameters for the network defined by function , denotes the templates in the feasible space that should match with, denotes the loss function, denotes the weight decay with norm, denotes the regularizer that measures the difference between and , and are predefined constants. Different from conventional CNNs, here we propose learning not only the network weights but also the matching templates within the feasible space that encodes certain constraints on the templates such as symmetry. In the sequel we will explain how to effectively design a scale-invariant network , and how to efficiently construct a rotation-invariant regularizer .
3.1 Network Architecture
We illustrate our network in Fig. 2, where all operations are basic and widely used in CNNs, such as batch normalization (BN), and "+" denotes one operation followed by the other. Due to the small image sizes ( pixels) in our experiments, we downsample only three times, using max-pooling. In each block, the first convolutional layer is responsible for mapping the inputs into a higher-dimensional space, and the other two convolutional layers learn a (linear) transformation within the same space. For grayscale images, the input dimension is changed from 3 to 1.
Different from existing networks such as GoogLeNet and TI-Pooling, we propose extracting features at different scales using a sequence of convolutional operations. Considering the trade-off between computational efficiency and accuracy, we exploit only three scales, using a fixed filter size in each convolutional layer, and use maxout to select the scale with the maximum response; this scale is taken as the one that best fits the object. In fact, we stack two and three convolutions to efficiently approximate the responses of larger filter sizes. As the network depth increases, information within larger scales (receptive fields) can be extracted as well.
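The claim that stacking small convolutions emulates larger filters follows from standard receptive-field arithmetic: each additional k-by-k convolution with stride 1 grows the effective receptive field by k − 1. A tiny sketch:

```python
def receptive_field(kernel_sizes):
    """Effective receptive field of a stack of stride-1 convolutions.

    Starting from a single pixel (rf = 1), each k-by-k convolution
    extends the receptive field by k - 1 pixels.
    """
    rf = 1
    for k in kernel_sizes:
        rf += k - 1
    return rf
```

With 3x3 filters, stacks of one, two, and three convolutions cover 3x3, 5x5, and 7x7 receptive fields, respectively, which is why maxout over the outputs of the stacked layers selects among three scales.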
We also find that network depth is more important than network width for accuracy. It has been demonstrated in Wide Residual Networks (WRN) that wider networks can improve performance; in contrast to the parallel mechanism in WRN, in each block we apply convolutions sequentially. Note that the proposed mechanism can be integrated with other networks as well.
3.2 Training with Rotation-Invariant Regularizer
3.2.1 General Formulation
As illustrated in Fig. 1(b), in order to enforce the filters to satisfy certain spatial properties such as rotation invariance, the templates need to be constructed in a way that encodes such properties. Therefore, we propose the following general formulation for rotation-invariant regularizers:
where denotes the index of a 2D spatial filter, denotes the expectation over all 2D spatial filters, denotes the 2D-index of a weight in the -th filter with size , , denotes the ceiling function, denotes a distance function, denotes a hash function that determines the weight pattern in the templates for matching, and correspondingly is a learnable function.
Choices of Distance Function . In general we do not impose any explicit requirement on . For instance, it can be the -norm, the -norm, or a group sparsity norm such as the -norm. Moreover, the distance can be measured not only in Euclidean but also in non-Euclidean spaces, as in manifold regularization , which would be appreciated in geometric deep learning .
Choices of Hash Function . For rotation invariance, it should ideally define a circular pattern in a continuous space. Due to the discretization of images, however, it hardly forms circles in filters without interpolation, which would significantly increase the computational complexity of convolution. Instead, we propose learning simpler patterns that approximate circles. For instance, we illustrate two exemplar patterns for filters in Fig. 3, where the patterns in (a) and (b) are defined by two different hash functions, with the floor function used in the latter. Other hash functions may also be applicable here, but finding the best one is outside the scope of this paper.
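Since the exact hash formulas are not reproduced here, the following sketch uses one plausible choice (an assumption on our part): the Chebyshev, i.e. maximum-coordinate, distance from the filter center, which yields concentric square rings. Positions sharing a ring index would share one template value, approximating a circular pattern.

```python
def ring_hash(k):
    """Assign each position of a k-by-k filter to a concentric square ring.

    The ring index is the Chebyshev distance from the filter center,
    so the center gets index 0 and the outer border gets the largest index.
    """
    c = (k - 1) / 2.0  # filter center (assumes odd k for integer rings)
    return [[int(max(abs(i - c), abs(j - c))) for j in range(k)]
            for i in range(k)]
```

For a 3x3 filter this produces a center cell with index 0 surrounded by a ring of index 1; a 5x5 filter adds a second ring of index 2.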
3.2.2 An Empirical Showcase
In this section we show the specific regularizer that we use in our experiments. For simplicity and efficiency, we employ the least-squares loss and the pattern in Fig. 3(a), without fine-tuning them for accuracy on the data sets.
Specifically we define our empirical rotation-invariant regularizer as follows:
where is a scalar.
Similar to the center loss, here we aim to reduce the variance among the weights in each 2D spatial filter, on average. Meanwhile, the patterns in the templates are updated automatically with the mean of the weights. In this way we can learn filters that better approximate 2D spatial circular patterns for rotation invariance. In backpropagation, since the regularizer in Eq. 4 is always differentiable, any deep learning solver such as stochastic gradient descent (SGD) can be used to train the network with our rotation-invariant regularizer.
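A least-squares version of such a regularizer can be sketched as follows. This is a simplified, hypothetical form of our own devising: for each filter, weights are grouped by the hash pattern, the template value of each group is its mean (the optimum under a least-squares penalty), and the penalty is the averaged squared deviation.

```python
from collections import defaultdict

def rotation_invariant_reg(filt, pattern):
    """Least-squares penalty between a 2D filter and a pattern-constant template.

    Weights sharing a pattern (hash) value are matched against one template
    entry; the optimal entry is their mean, so the penalty reduces to the
    within-group squared deviation, averaged over the filter's pixels.
    """
    k = len(filt)
    groups = defaultdict(list)
    for i in range(k):
        for j in range(k):
            groups[pattern[i][j]].append(filt[i][j])
    total = 0.0
    for vals in groups.values():
        m = sum(vals) / len(vals)           # template entry for this group
        total += sum((v - m) ** 2 for v in vals)
    return total / (k * k)
```

A filter that is constant within each ring incurs zero penalty, while any within-ring variation is penalized, pushing the learned filters toward the circular patterns of Fig. 3.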
Discussion. Recall that Fig. 3 essentially encodes the structural patterns that we expect learned filters to have in order to handle rotation. One may argue that such structures could be enforced strictly by converting the regularizer in Eq. 2 into hard constraints and solving a constrained nonconvex optimization problem. We choose not to do so because the new problem would potentially be much harder to solve than the one in Eq. 2. Besides, since the structures in Fig. 3 are already approximations of the circular structure, we do not need to guarantee that all weights with the same color are identical; the extra freedom afforded by regularization may compensate, in terms of accuracy, for the loss incurred by the structural approximation.
|T. S. (F)||98.87||94.79||94.02||97.47||91.47||40.87||88.35||95.15||91.16||68.29|
|T. S. (10)||84.15±0.48||26.43±0.85||27.57±0.94||47.31±1.37||29.66±0.63||27.72±1.74||28.20±2.06||54.35±0.56||32.49±0.29||28.52±0.26|
4.1 Benchmark Data with Affine Transformations
4.1.1 Experimental Setup
affNIST is created by applying random small affine transformations to each grayscale image in MNIST (10 classes). It is designed to test an algorithm's tolerance to such transformations. There are 60K training and validation samples and 10K test samples in affNIST, with size pixels. To facilitate data processing in training, we resize all the images to pixels.
MNIST-rot is another variant of MNIST, in which a random rotation between and is applied to each image. It has 10K/2K/50K training/validation/test samples. To facilitate data processing in training, we again resize all the grayscale images to pixels.
Traffic Sign contains 43 classes with unbalanced class frequencies, 34799 training RGB images, and 12630 test RGB images. It reflects strong variations in the visual appearance of signs due to distance, illumination, weather conditions, partial occlusions, and rotations, making it a very challenging recognition problem.
Networks. We compare our approach with several state-of-the-art networks of similar model complexity to ours: RotEqNet (https://github.com/COGMAR/RotEqNet), Harmonics (https://github.com/deworrall92/harmonicConvolutions), TI-Pooling (https://github.com/dlaptev/TI-pooling), GCNN (https://github.com/tscohen/gconv_experiments), STN (https://github.com/kevinzakka/spatial-transformer-network), ResNet-32 (https://github.com/tensorflow/models/tree/master/research/resnet), CapsNet (https://github.com/naturomics/CapsNet-Tensorflow), GoogLeNet (https://github.com/flyyufelix/cnn_finetune/blob/master/googlenet), and DCN (https://github.com/felixlaumon/deform-conv). Specifically, TI-Pooling is designed for scale invariance, while RotEqNet, Harmonics, and GCNN are designed for rotation invariance. We use the public code for our comparison.
We implement our default network in Tensorflow, following the architecture in Fig. 2 with the default numbers of channels. Note that the implementations of the networks in our comparison differ (GCNN: Chainer; GoogLeNet and DCN: Keras; CapsNet, TI-Pooling, Harmonics, STN, and ResNet-32: Tensorflow; RotEqNet: PyTorch), which may lead to differences in computational efficiency.
Training Protocols. We tune each network to report its best performance on the data sets. By default we train the networks for 42000 iterations with mini-batch size 100, weight decay, and momentum 0.9. The global learning rate is set to 0.01 when training on all images, or 0.0001 when training on a few images per class, and it is multiplied by 0.1 at the 20000th iteration and again at the 30000th iteration. For each network the hyper-parameter tuning starts from this default setting, and the best setting may differ slightly from the default. We follow this default setting in all the experiments, with the regularization parameter fixed. The numbers reported here are averages over three trials.
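The step schedule described above can be written as a small helper (the rate values and decay iterations come from the protocol; the function name is ours):

```python
def learning_rate(iteration, base_lr=0.01):
    """Step schedule: multiply the base rate by 0.1 at iterations
    20000 and 30000, as in the default training protocol."""
    lr = base_lr
    if iteration >= 20000:
        lr *= 0.1
    if iteration >= 30000:
        lr *= 0.1
    return lr
```

With the default base rate of 0.01, training thus runs at 0.01, then 0.001, then 0.0001 over the 42000 iterations.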
For a fair comparison, we follow the data augmentation settings in the publications of most of the competitors. Specifically, by default we do not employ data augmentation on affNIST and Traffic Sign, but we do on MNIST-rot.
Better Generalization, Data-Efficiency, & Robustness. We summarize the test accuracy comparison in Table 1. Using either all training/validation images or 10 random ones per class, our method consistently outperforms the competitors on the three data sets, with margins of 1.96% and 30.37%, respectively. Using the full set, the standard deviations of all methods are small and similar, and thus we do not report them.
To better demonstrate data-efficiency, we illustrate the test accuracy comparison using a few random training/validation images per class in Fig. 4. Overall, our method works significantly better than the competitors, with large margins. Note that on MNIST-rot our performance is worse than some of the competitors when using 1 or 2 images per class for training. One possible reason is data augmentation; another is that some of the networks are designed specifically for rotation invariance, which this data set fits exactly. As the number of training samples increases, however, our method again beats all the competitors. It is worth mentioning that similar experiments on MNIST-rot were conducted for Harmonic Networks to show data-efficiency and robustness: using a fraction of the full training/validation data, Harmonics lost about 3%. Here we compare the different networks using far less data to show the superiority of our method over the others. Empirically we observe that our method works very robustly, with a standard deviation of less than 1% in general.
In addition, we can further improve our performance using data augmentation. In Fig. 5 we illustrate the performance comparison on Traffic Sign with and without data augmentation. Using 10 random training images per class, we achieve 87.84%, an improvement of 3.69%.
Training & Testing Behavior. We illustrate the training and test accuracy of each network on affNIST with the full training set in Fig. 6. All the networks are well trained and converge. In the testing stage our network converges faster than most of the competitors, with better accuracy; similar observations hold in training, and on the other two data sets. From this perspective we can also argue that our method generalizes better.
|Traffic Sign (F)||98.42||98.87||98.42|
Effect of Multi-Scale Maxout. In Table 2 we list the test accuracy using different multi-scale settings while fixing the regularization parameter. The differences between settings are marginal, which again demonstrates the good generalization and robustness of our method. Considering the trade-off between accuracy and computational efficiency, we choose 3 [Conv+BN] as the default setting used in Fig. 2.
Effect of Rotation-Invariant Regularization. We illustrate this effect in Fig. 7, using the default multi-scale maxout setting and different regularization weights, where a value of zero disables our regularizer. Using the full set for training, the performances are almost identical, probably because the number of training images is already sufficiently large to capture the scaling and rotation information. Using a few training images, 10 per class, the benefit of our rotation-invariant regularizer becomes much clearer, especially on affNIST: with the default weight there is a 1.52% improvement, on average, over training without our regularizer.
We also observe that the value of our rotation-invariant regularizer becomes very small empirically, for instance on affNIST, indicating that our learned filters are very close to the spatial circular patterns.
Behavior with Different Numbers of Parameters. We reduce the number of parameters in our network by channel-wise shrinking. Specifically, in ascending order of parameter count, the network channels are set as follows: [4,4,4,4,4], [16,16,16,16,16], [32,32,32,32,32], [32,64,64,64,64], [32,64,128,128,128], [32,64,128,256,256], and [32,64,128,256,512], each followed by an FC layer of 1024 nodes and another FC layer for classification.
We first compare the performance at these parameter counts with the competitors in Fig. 8. Beyond about 200K parameters the improvement of our approach slows, while below 200K our performance drops significantly as the number of parameters decreases. In the figure, 200K corresponds to the setting [32,64,64,64,64], whose performance is already the best or on par with it.
We then compare the running time per iteration in both training and testing in Fig. 9. We run all the code on the same machine with a Titan XP GPU. In training, the running time includes the feedforward calculation and backpropagation (which dominates training time), while in testing it includes only the feedforward calculation. In both training and testing, our computational cost grows roughly exponentially with the number of parameters (note that the y-axis is in log scale). Although some of the code is written in different deep learning environments, we can still fairly compare with Harmonics and STN. Harmonics has fewer parameters, leading to faster backpropagation and thus shorter training time; its operations, however, are more complex than ours, so with a similar number of parameters our method is faster in testing. The operations in STN are much simpler than in both Harmonics and ours, leading to faster running speed in both training and testing. Note that to further improve computational efficiency, we can simply remove one Conv+BN from the multi-scale maxout block and still achieve similar accuracy (see Table 2).
4.2 Comparison on CIFAR-100 
Beyond the benchmark data sets with affine transformations, we also test our method on "natural" images: we illustrate comparison results on CIFAR-100 in Fig. 10. CIFAR-100 contains 60,000 color images in 100 classes, with 500/100 training/testing images per class. Following the same training protocol, we randomly sample a few images per class to further demonstrate our superiority, especially in data-efficiency.
As we see in Fig. 10, our method significantly and consistently outperforms the competitors given a few training samples. For instance, using 100 samples per class, ours achieves 52.67% test accuracy, an improvement of almost 10% over ResNet-32 (the second best). Using the full training set, ours achieves 78.33%, which is slightly lower than WRN-28-10 (80.75%) but higher than ResNet-32 (76.7%) and GoogLeNet (78.03%), and dramatically higher than the other networks that learn scale- or rotation-invariant representations, such as TI-Pooling (31.77%).
In this paper we propose a novel multi-scale maxout deep CNN and a novel rotation-invariant regularizer to learn affine-invariant representations for object recognition in images. Multi-scale convolution with maxout handles translation and scale, while enforcing 2D filters to approximate circular patterns via our regularizer induces invariance to rotation. By taking these as a priori knowledge, we can easily interpret our network architecture as well as its training procedure. We test our method on three benchmark data sets as well as CIFAR-100 to demonstrate its superiority over the state-of-the-art in terms of generalization, data-efficiency, and robustness. In particular, with a few training samples our method works significantly better, leading to the hypothesis that introducing a priori knowledge into deep learning can effectively reduce the amount of data required to accomplish a task. We plan to explore this topic further in future work.
-  B. Amos, I. Jimenez, J. Sacks, B. Boots, and J. Z. Kolter. Differentiable mpc for end-to-end planning and control. In NIPS, pages 8299–8310, 2018.
-  J. Andreas, M. Rohrbach, T. Darrell, and D. Klein. Neural module networks. In CVPR, pages 39–48, 2016.
-  N. Audebert, B. Le Saux, and S. Lefèvre. Semantic segmentation of earth observation data using multimodal and multi-scale deep networks. In ACCV, pages 180–196, 2016.
-  M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. JMLR, 7(Nov):2399–2434, 2006.
-  M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam, and P. Vandergheynst. Geometric deep learning: going beyond euclidean data. IEEE Signal Processing Magazine, 34(4):18–42, 2017.
-  T. Cohen and M. Welling. Group equivariant convolutional networks. In ICML, pages 2990–2999, 2016.
-  D. Crandall, P. Felzenszwalb, and D. Huttenlocher. Spatial priors for part-based recognition using statistical models. In CVPR, volume 1, pages 10–17, 2005.
-  J. Dai, H. Qi, Y. Xiong, Y. Li, G. Zhang, H. Hu, and Y. Wei. Deformable convolutional networks. In CVPR, pages 764–773, 2017.
-  F. de Avila Belbute-Peres, K. Smith, K. Allen, J. Tenenbaum, and J. Z. Kolter. End-to-end differentiable physics for learning and control. In NIPS, pages 7178–7189, 2018.
-  P. F. Felzenszwalb and D. P. Huttenlocher. Distance transforms of sampled functions. Theory Of Computing, 8:415–428, 2012.
-  M. A. Fischler and R. A. Elschlager. The representation and matching of pictorial structures. IEEE Transactions on computers, 100(1):67–92, 1973.
-  R. Girshick, F. Iandola, T. Darrell, and J. Malik. Deformable part models are convolutional neural networks. In CVPR, pages 437–446, 2015.
-  I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio. Maxout networks. In ICML, pages III–1319, 2013.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, pages 770–778, 2016.
-  G. Hinton. Taking inverse graphics seriously. https://www.cs.toronto.edu/~hinton/csc2535/notes/lec6b.pdf.
-  G. Hinton, N. Frosst, and S. Sabour. Matrix capsules with em routing. In ICLR, 2018.
-  G. E. Hinton, A. Krizhevsky, and S. D. Wang. Transforming auto-encoders. In International Conference on Artificial Neural Networks, pages 44–51. Springer, 2011.
-  E. Hoogeboom, J. W. Peters, T. S. Cohen, and M. Welling. Hexaconv. arXiv preprint arXiv:1803.02108, 2018.
-  G. Huang, D. Chen, T. Li, F. Wu, L. van der Maaten, and K. Q. Weinberger. Multi-scale dense networks for resource efficient image classification. arXiv preprint arXiv:1703.09844, 2017.
-  S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, pages 448–456, 2015.
-  L. Isik, E. M. Meyers, J. Z. Leibo, and T. Poggio. The dynamics of invariant object recognition in the human visual system. Journal of neurophysiology, 111(1):91–102, 2013.
-  M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial transformer networks. In NIPS, pages 2017–2025, 2015.
-  X. Jia, B. De Brabandere, T. Tuytelaars, and L. V. Gool. Dynamic filter networks. In NIPS, pages 667–675, 2016.
-  A. Kanazawa, A. Sharma, and D. Jacobs. Locally scale-invariant convolutional neural networks. arXiv preprint arXiv:1412.5104, 2014.
-  A. Krizhevsky, V. Nair, and G. Hinton. Cifar-100 (canadian institute for advanced research).
-  D. Laptev, N. Savinov, J. M. Buhmann, and M. Pollefeys. Ti-pooling: transformation-invariant pooling for feature learning in convolutional neural networks. In CVPR, pages 289–297, 2016.
-  H. Larochelle, D. Erhan, A. Courville, J. Bergstra, and Y. Bengio. An empirical evaluation of deep architectures on problems with many factors of variation. In ICML, pages 473–480, 2007.
-  G. Larsson, M. Maire, and G. Shakhnarovich. Fractalnet: Ultra-deep neural networks without residuals. arXiv preprint arXiv:1605.07648, 2016.
-  Y. LeCun and C. Cortes. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/.
-  Z. Liao and G. Carneiro. Competitive multi-scale convolution. arXiv preprint arXiv:1511.05635, 2015.
-  T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramid networks for object detection. In CVPR, pages 2117–2125, 2017.
-  S. Luan, C. Chen, B. Zhang, J. Han, and J. Liu. Gabor convolutional networks. TIP, 2018.
-  D. Marcos, M. Volpi, N. Komodakis, and D. Tuia. Rotation equivariant vector field networks. In ICCV, pages 5048–5057, 2017.
-  S. Sabour, N. Frosst, and G. E. Hinton. Dynamic routing between capsules. In NIPS, pages 3859–3869, 2017.
-  J. Snell, K. Swersky, and R. Zemel. Prototypical networks for few-shot learning. In NIPS, pages 4077–4087, 2017.
-  J. Stallkamp, M. Schlipsing, J. Salmen, and C. Igel. The German Traffic Sign Recognition Benchmark: A multi-class classification competition. In IJCNN, pages 1453–1460, 2011.
-  C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, pages 1–9, 2015.
-  R. Takahashi, T. Matsubara, and K. Uehara. Scale-invariant recognition by weight-shared cnns in parallel. In ACML, pages 295–310, 2017.
-  J. Wang, Z. Wei, T. Zhang, and W. Zeng. Deeply-fused nets. arXiv preprint arXiv:1605.07716, 2016.
-  M. Weiler, F. A. Hamprecht, and M. Storath. Learning steerable filters for rotation equivariant cnns. In CVPR, pages 849–858, 2018.
-  Y. Wen, K. Zhang, Z. Li, and Y. Qiao. A discriminative feature learning approach for deep face recognition. In ECCV, pages 499–515, 2016.
-  D. E. Worrall, S. J. Garbin, D. Turmukhambetov, and G. J. Brostow. Harmonic networks: Deep translation and rotation equivariance. In CVPR, volume 2, 2017.
-  J. Wu, D. Li, Y. Yang, C. Bajaj, and X. Ji. Dynamic filtering with large sampling field for convnets. In ECCV, pages 185–200, 2018.
-  Y. Xu, T. Xiao, J. Zhang, K. Yang, and Z. Zhang. Scale-invariant convolutional neural networks. arXiv preprint arXiv:1411.6369, 2014.
-  F. Yu and V. Koltun. Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122, 2015.
-  S. Zagoruyko and N. Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
-  Q. Zhang, Y. Nian Wu, and S.-C. Zhu. Interpretable convolutional neural networks. In CVPR, pages 8827–8836, 2018.
-  T. Zhang, G.-J. Qi, B. Xiao, and J. Wang. Interleaved group convolutions. In CVPR, 2017.
-  X. Zhang, L. Liu, Y. Xie, J. Chen, L. Wu, and M. Pietikäinen. Rotation invariant local binary convolution neural networks. In ICCV Workshops, pages 1210–1219, 2017.
-  Z. Zhang and M. Brand. Convergent block coordinate descent for training tikhonov regularized deep neural networks. In NIPS, pages 1721–1730. 2017.
-  Y. Zhou, Q. Ye, Q. Qiu, and J. Jiao. Oriented response networks. In CVPR, pages 4961–4970, 2017.