Patch Reordering: a Novel Way to Achieve Rotation and Translation Invariance in Convolutional Neural Networks

11/28/2019 ∙ by Xu Shen, et al.

Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance on many visual recognition tasks. However, the combination of convolution and pooling operations is only invariant to small, local location changes of meaningful objects in the input. Sometimes, such networks are trained with data augmentation to encode this invariance into their parameters, which restricts the capacity of the model to learn the content of these objects. A more efficient use of the parameter budget is to encode rotation or translation invariance into the model architecture, which relieves the model from the need to learn it. To enable the model to focus on learning the content of objects rather than their locations, we propose to rank the patches of the feature maps before feeding them into the next layer. When patch ranking is combined with convolution and pooling operations, we obtain consistent representations regardless of the location of meaningful objects in the input. We show that the patch ranking module improves the performance of the CNN on many benchmark tasks, including MNIST digit recognition, large-scale image recognition, and image retrieval. The code is available at .




Introduction

In recent years, convolutional neural networks (CNNs) have achieved state-of-the-art performance on many computer vision tasks, including image recognition [26, 23, 10], semantic segmentation [20], image captioning [14, 5, 6], action recognition [8, 24], and video captioning [27, 22]. The success of CNNs comes from their ability to learn the two-dimensional structure of images, in which objects and patterns may appear at different locations. To detect and learn patterns regardless of their locations, the weights of local filters are shared across positions in the image.

Since distortions or shifts of the input can cause the positions of salient features to vary, weight sharing is very important for CNNs to detect invariant elementary features regardless of location changes of these features [18]. In addition, pooling reduces the sensitivity of the output to small local shifts and distortions by reducing the resolution of the input feature maps. However, another important property of weight sharing and pooling is that the location of a detected feature in the output feature maps mirrors that of the corresponding local patch in the input feature maps. As a result, location changes of input visual patterns in lower layers propagate to higher convolutional layers. Due to the typically small local spatial support of pooling and convolution kernels, large global location changes of patterns in the input (e.g., global rotation or translation of objects) propagate all the way to the feature maps of the final convolutional layer (as shown in Fig. 1). Consequently, the subsequent fully connected layers have to learn location invariance to produce consistent predictions or representations, which restricts the use of the parameter budget for achieving more powerful outputs.

In this paper, we introduce a Patch Reordering (PR) module that can be embedded into a standard CNN architecture to improve rotation and translation invariance. The output feature maps of a convolutional layer are first divided into multiple tiers of non-overlapping local patches at different spatial pyramid levels. We reorder the local patches at each level based on their energy (e.g., the L1 or L2 norm of the patch activations). To retain the spatial consistency of local patterns, we only reorder the patches of a given level locally (i.e., within each single patch of its upper level). In convolutional layers, a location change of the patterns in the input feature maps results in a corresponding location change in the output feature maps, while the local patterns (activations) themselves are equivalent. As a result, ranking these local patterns in a specific order leads to a consistent representation regardless of the locations of local patterns in the input, that is, rotation or translation invariance. The proposed module can be inserted after any convolutional layer and allows end-to-end training of the models in which it is applied. In addition, we need no extra training supervision, no modification to the training process, and no preprocessing of input images.

Figure 1: Large global location changes of patterns in the input (e.g., global rotation or translation of objects) propagate to the feature maps of the final convolutional layer. As a result, the fully connected layers have to encode invariance to these location changes into their parameters, which restricts the capacity of the model to learn the content of the objects.

Related Work

The equivalence and invariance of CNN representations under input image transformations were investigated in [19, 4, 7]. Specifically, Cohen and Welling [4] showed, using the theory of group representations, that a linear transform of a good visual representation is equivalent to a combination of the elementary irreducible representations. Lenc and Vedaldi [19] estimated the linear relationships between representations of the original and transformed images. Gens and Domingos [7] proposed a generalization of CNNs that forms feature maps over arbitrary symmetry groups based on the theory of symmetry groups, resulting in feature maps that are more invariant to symmetry groups. Bruna and Mallat [1] proposed a wavelet scattering network to compute a translation-invariant image representation. Local linear transformations were adopted in the feature learning algorithms of [25] for the purpose of transformation-invariant feature learning.

Numerous recent works have focused on explicitly introducing spatial invariance into deep learning architectures. For unsupervised feature learning, Sohn and Lee [25] presented a transform-invariant restricted Boltzmann machine that compactly represents data by its weights and their transformations, achieving invariance of the feature representation via probabilistic max pooling. In [15], each hidden unit was augmented with a latent transformation assignment variable that described the selection of the transformed view of the weights associated with the unit. In both works, the transformed filters were only applied at the center of the largest receptive field. In tiled convolutional neural networks [17], invariance was learned explicitly by square-root pooling over hidden units computed with partially untied weights; here, additional learned parameters were needed when untying the weights.

The two latest works on incorporating spatial invariance into CNNs are described in [13, 11]. In [13], feature maps in CNNs were scaled or rotated to multiple levels, and the same kernel was convolved across the input at each scale; the responses of the convolution at each scale were then normalized and pooled at each spatial location to obtain a locally scale-invariant representation. In this model, only a limited set of scales was considered, and extra modules were needed in the feature extraction process. To address different transformation types in input images, Jaderberg et al. [11] proposed inserting a spatial transformer module between CNN layers, which explicitly transforms the input image into a proper appearance and feeds the transformed input into the CNN model.

In summary, all the aforementioned works improve the transform invariance of deep learning models by adding extra feature-extraction modules, more learnable parameters, or extra transformations on input images, which makes the trained CNN model problem-dependent and not generalizable to other datasets. In contrast, we propose a very simple reordering of feature maps during the training of CNN models. No extra feature-extraction modules or additional learnable parameters are needed. Therefore, it is easy to apply the trained model to other vision tasks.

Patch Reordering in Convolutional Neural Networks

Weight sharing in CNNs allows feature detectors to detect features regardless of their spatial locations in the image; however, the corresponding location of output patterns varies when subject to location changes of the local patterns in the input. Learning invariant representations causes parameter redundancy problems in current CNN models. In this section, we will reveal this phenomenon and propose the formulation of our Patch Reordering module.

Parameter Redundancy in Convolutional Neural Networks

(a) Similarity of weights in fc6.
(b) Similarity of weights in fc7.
Figure 2: Log histogram of similarity between weights in the fc6 (a) and fc7 (b) layers of a CNN with (CNN-Patch-Reordering) and without (CNN) the patch reordering module. In conventional CNNs, the correlation of parameters in fc6 is much higher than that in fc7, while for PR-CNN it is quite consistent across both layers. This implies that location changes of input visual patterns lead to higher parameter redundancy in the subsequent layers.

Let $\mathbf{x}$ denote the output feature maps of a convolutional layer with $N$ elements ($N = H \times W$ for feature maps with height $H$ and width $W$). Each $\mathbf{x}_{(w,h)}$ is a $C$-dimensional input feature vector corresponding to location $(w, h)$. If it is followed by a fully connected layer, the output $y$ can be computed by

$$y = f\Big(\sum_{(w,h)} \mathbf{W}_{(w,h)}^{\top} \mathbf{x}_{(w,h)} + b\Big),$$

where $f$ is a non-linear activation function and $\mathbf{W}_{(w,h)}$ are the weights for location $(w, h)$. If there is some location change (such as a rotation or translation) of the input features, the resulting new input becomes $\mathbf{x}'$. Since there are no value changes (except cropping or padding), for $\mathbf{x}_{(w,h)}$ in any position $(w, h)$ we can always find its correspondence in the transformed input, i.e., $\mathbf{x}'_{(w',h')} = \mathbf{x}_{(w,h)}$. If the network learns to be invariant under this type of location change, the output (or representation) should remain the same. Specifically,

$$f\Big(\sum_{(w,h)} \mathbf{W}_{(w,h)}^{\top} \mathbf{x}_{(w,h)} + b\Big) = f\Big(\sum_{(w',h')} \mathbf{W}_{(w',h')}^{\top} \mathbf{x}'_{(w',h')} + b\Big).$$

Then, in the monotonous section of $f$, we have

$$\sum_{(w,h)} \mathbf{W}_{(w,h)}^{\top} \mathbf{x}_{(w,h)} = \sum_{(w',h')} \mathbf{W}_{(w',h')}^{\top} \mathbf{x}'_{(w',h')}.$$

Since $\mathbf{x}'_{(w',h')} = \mathbf{x}_{(w,h)}$, the aforementioned equation can be simplified as:

$$\sum_{(w,h)} \big(\mathbf{W}_{(w,h)} - \mathbf{W}_{(w',h')}\big)^{\top} \mathbf{x}_{(w,h)} = 0.$$

Because $\mathbf{x}$ varies as the input image changes, we have $\mathbf{W}_{(w,h)} \approx \mathbf{W}_{(w',h')}$. That is to say, encoding rotation or translation invariance into CNNs leads to highly correlated parameters in higher layers, and therefore the capacity of the model decreases. To validate this redundancy in CNN models, we compare the log histograms of cosine similarities between weights in fc6 and fc7 of AlexNet [16]. Fig. 2 shows that the parameter redundancy of the model is significantly reduced because of the more consistent feature maps after patch reordering.
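As an illustration of this redundancy measure, the following NumPy sketch (a hypothetical helper, not the paper's code) computes the histogram of pairwise cosine similarities between the rows of a fully connected weight matrix; mass concentrated near 1 indicates redundant parameters:

```python
import numpy as np

def weight_similarity_histogram(W, bins=50):
    """Pairwise cosine similarity between the rows of a fully
    connected weight matrix W (one row per output neuron).

    Returns histogram counts over [-1, 1]; high counts near 1
    indicate redundant (highly correlated) parameters.
    """
    # Normalize each row to unit L2 norm.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    Wn = W / np.maximum(norms, 1e-12)
    # Cosine similarity of every pair of rows.
    sim = Wn @ Wn.T
    # Keep only distinct pairs (upper triangle, no diagonal).
    iu = np.triu_indices(sim.shape[0], k=1)
    counts, _ = np.histogram(sim[iu], bins=bins, range=(-1.0, 1.0))
    return counts
```

A log-scaled plot of these counts for fc6 and fc7 would correspond to the histograms of Fig. 2.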

Patch Reordering

(a) Pooling layer.
(b) Pooling layer with patch reordering.
Figure 3: Side-by-side comparison of the structure of (a) a conventional convolutional layer and (b) the proposed convolutional layer with the patch reordering module. Feature maps are divided into non-overlapping patches at each level. The four patches of the first level are first reordered so that patches with higher energy precede the others. Then, we repeat this process within each single patch of the previous level (here, we only show the reordering of the first patch in level 2). A visual example of the output feature maps for original and rotated/translated input images, with and without patch reordering, is illustrated in the Supplemental Material.

If one object is located at different positions in two images, the same visual features of the object will locate at different positions in their corresponding convolution feature maps. The feature maps generated by deep convolutional layers are analogous to the feature maps in traditional methods [2, 3]. In those methods, image patches or SIFT vectors are densely extracted and then encoded. These encoded features compose the feature maps and are pooled into a histogram of bins. Reordering of the pooled histogram achieves translation and rotation invariance. Likewise, since the deep convolutional feature maps are the encoded representations of images, reordering can be applied in a similar way.

Since convolutional kernels function as feature detectors, each activation in the output feature maps corresponds to a match of a specific visual pattern. Therefore, as the feature detectors slide over the input feature maps, locations with matched patterns generate very high responses, and locations without matches generate low ones. Consequently, the "energy" distribution (L1 norm or L2 norm) of the local patches in the output feature maps is heterogeneous. Furthermore, patches with different energies correspond to different parts of the input object. Naturally, if we rank the patches by their energies in descending or ascending order, the output order will be quite consistent regardless of how rotation or translation changes the location of visual patterns in the input. Finally, a rotation- and translation-invariant representation is generated.

Forward Propagation

The details of the patch reordering module are illustrated in Fig. 3. The feature maps are divided into $4^n$ non-overlapping patches at level $n$, where $n$ is a predefined parameter (e.g., $n = 1$ or $2$). We then rank the patches within each patch of level $n-1$ by their energy (L1 or L2 norm), e.g.,

$$E_i = \sum_{(w,h) \in P_i} \big\| \mathbf{x}_{(w,h)} \big\|_1 ,$$

where $P_i$ denotes the $i$-th patch.
The patches are placed from the upper left to the lower right in descending order of energy. The offset of each pixel in a patch is given by the gap between the target patch location and the source patch location. Finally, the output feature map can be computed by

$$\tilde{\mathbf{x}}_{(w,h)} = \mathbf{x}_{(w + \Delta w,\, h + \Delta h)} ,$$

where $(\Delta w, \Delta h)$ is the offset of the patch containing location $(w, h)$.
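The single-level forward pass can be sketched in NumPy as follows. This is a minimal illustration, not the authors' Caffe implementation; `patch_reorder` and the `grid` parameter are hypothetical names, one pyramid level and L1 energy are assumed, and the permutation is returned for reuse in the backward pass:

```python
import numpy as np

def patch_reorder(fmap, grid=2):
    """Single-level patch reordering sketch. fmap: (C, H, W) feature
    maps; H and W are assumed divisible by `grid`. Patches are ranked
    by L1 energy and rewritten from upper-left to lower-right in
    descending order. Returns the reordered maps and the permutation.
    """
    C, H, W = fmap.shape
    ph, pw = H // grid, W // grid
    # Extract the grid*grid non-overlapping patches, row-major.
    patches = [fmap[:, r*ph:(r+1)*ph, c*pw:(c+1)*pw]
               for r in range(grid) for c in range(grid)]
    # Energy of each patch: L1 norm of its activations.
    energy = [np.abs(p).sum() for p in patches]
    order = np.argsort(energy)[::-1]          # descending energy
    out = np.empty_like(fmap)
    for dst, src in enumerate(order):
        r, c = divmod(dst, grid)
        out[:, r*ph:(r+1)*ph, c*pw:(c+1)*pw] = patches[src]
    return out, order
```

Because the output contains exactly the same patches as the input, only in a canonical energy order, any input transformation that merely permutes the patches yields the same output.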
Backward Propagation

During the back-propagation process, we simply pass the error from each output pixel to its corresponding input pixel:

$$\frac{\partial L}{\partial \mathbf{x}_{(w + \Delta w,\, h + \Delta h)}} = \frac{\partial L}{\partial \tilde{\mathbf{x}}_{(w,h)}} .$$
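Since reordering is a pure permutation, the backward pass reduces to routing each output patch's gradient back to its source patch. A minimal sketch under the same assumptions as above (`patch_reorder_backward` and `order` are hypothetical names; `order` is the forward permutation mapping destination slots to source patches):

```python
import numpy as np

def patch_reorder_backward(grad_out, order, grid=2):
    """Route gradients back through the reordering: the error at each
    output patch position is copied to the input patch that was moved
    there in the forward pass. The ordering itself gets no gradient.
    """
    C, H, W = grad_out.shape
    ph, pw = H // grid, W // grid
    grad_in = np.empty_like(grad_out)
    for dst, src in enumerate(order):
        rd, cd = divmod(dst, grid)
        rs, cs = divmod(int(src), grid)
        grad_in[:, rs*ph:(rs+1)*ph, cs*pw:(cs+1)*pw] = \
            grad_out[:, rd*ph:(rd+1)*ph, cd*pw:(cd+1)*pw]
    return grad_in
```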

Experiments

In this section, we evaluate our proposed CNN with the patch reordering module on several supervised learning tasks and compare it with state-of-the-art methods, including traditional CNNs, SI-CNN [13], and ST-CNN [11]. First, we conduct experiments on distorted versions of the MNIST handwriting dataset, as in [11, 13]. The experimental results show that patch reordering achieves comparable or better classification performance. Second, to test the effectiveness of patch reordering for large-scale real-world image recognition, we compare our model with AlexNet [16] on the ImageNet-2012 dataset. The results demonstrate that patch reordering improves the learning capacity of the model and encodes translation and rotation invariance into the architecture even when trained on raw images only. Finally, to evaluate the generalization ability of the proposed model on other vision tasks with real-world transformations of images, we apply our model to the image retrieval task on the UK-Bench [21] dataset. The improvement in retrieval performance shows that the proposed model generalizes well and better handles real-world transformation variations.

We implement our method using the open-source Caffe framework [12]. For patch energy, we tested both the L1 and L2 norms and found little difference between them. Our code and model will be available online. For SI-CNN and ST-CNN, we directly report their results from the original papers on MNIST. For ImageNet-2012, since these two methods did not report results on this dataset, we forked their GitHub code for re-implementation.


MNIST

In this section, we use the MNIST handwriting dataset to evaluate all deep models. In particular, different neural networks are trained to classify MNIST data that have been transformed via rotation (R) and translation (T). The rotated dataset was generated by rotating digits with a random angle sampled from a uniform distribution. The translated dataset was generated by randomly locating the digit in a larger canvas.

Method R T
FCN 2.1 2.9
CNN 1.2 1.3
SI-CNN 0.9 -
ST-CNN 0.8 0.8
PR-CNN(ours) 0.8 0.7
Table 1: Classification error on the transformed MNIST dataset. The different distorted MNIST datasets are R (rotation) and T (translation). All models have the same number of parameters and use the same basic architectures.

Following [11], all networks use the ReLU activation function and softmax classifiers. All CNN networks have a convolutional layer (no padding), a max-pooling layer, a subsequent convolutional layer (no padding), and another max-pooling layer before the final classification layer, with the same number of filters per layer. For SI-CNN, the convolutional layers are replaced by rotation-invariant layers using six rotation angles. For ST-CNN, the spatial transformer module is placed at the beginning of the network. In our patch reordering CNN, the patch reordering module is applied to the second convolutional layer, and the feature maps are divided into four blocks at the first level. All networks are trained with SGD with no weight decay or dropout, and the learning rate is reduced by a fixed factor at regular intervals. Weights are initialized randomly, and all networks share the same random seed.

The experimental results are summarized in Table 1. Our model achieves better performance under translation and comparable performance under rotation. Because our model does not need any extra learnable parameters, feature extraction modules, or transformations on training images, the comparable performance still reflects the strength of the patch reordering CNN. For ST-CNN, the best results reported in [11] are obtained by training with a narrower, manually selected class of transformations (affine transformations). Since we did not optimize with respect to transformation classes, such a comparison would be unfair to our PR-CNN; we therefore compare with the most general ST-CNN, defined for the class of projective transformations: 0.8 (R) and 0.8 (T).


ImageNet-2012

Method Ori R T
CNN 57.1/80.2 36.6/57.7 46.5/70.8
CNN-Data-Aug 56.6/79.8 36.5/58.3 50.0/73.9
SI-CNN 57.2/80.2 36.8/58.9 -
ST-CNN 59.1/81.7 37.3/59.3 51.4/75.3
PR-CNN(ours) 60.4/82.4 40.7/63.3 54.9/78.0
Table 2: Classification accuracy on the ImageNet-2012 validation dataset. The evaluation datasets are Ori (original), R (rotation), and T (translation).

The ImageNet-2012 dataset consists of images from 1,000 classes and is split into three subsets: training (1.28M images), validation (50K), and testing (100K images with held-out class labels). Classification performance is evaluated using top-1 and top-5 accuracy. The former is the multi-class classification accuracy. The latter is the main evaluation criterion used in ILSVRC and is defined as the proportion of images whose ground-truth category is among the top-5 predicted categories. We use this dataset to test the performance of our model on a large-scale image recognition task.

CNN models are trained on raw images and tested on both raw and transformed images. For all transform types, the specific transformation is applied to the original images; the transformed images are then rescaled to a fixed smallest image side, and the center crop is used for testing. The rotated (R) dataset is generated by randomly rotating the original images with angles drawn from a uniform distribution. The translated (T) dataset is generated by randomly shifting an image by a bounded proportion of its size.

All models follow the architecture of AlexNet. For SI-CNN, the first, second, and fifth convolutional layers are replaced by rotation-invariant layers using six rotation angles. For ST-CNN, the input is fed into a spatial transformer network before AlexNet; the spatial transformer network uses bilinear sampling with an affine transformation function. As in [11], the size of the spatial transformer network is about half that of AlexNet. For our PR-CNN, the feature maps are divided into four blocks at the first level, and the patch reordering module is applied to the fifth convolutional layer.

To train SI-CNN and PR-CNN, we use a fixed base learning rate and decay it at regular intervals, training both networks with momentum, weight decay, and weight clipping; the convolutional and fully connected weights and biases are randomly initialized, and the bias learning rate is set to a multiple of the weight learning rate. For ST-CNN, since it does not converge under this setting, we fine-tune the network with the classification network initialized by the pre-trained AlexNet. The spatial transformer module consists of convolutional layers, pooling layers, and fully connected layers: the first convolutional layer filters the input and is followed by a pooling layer; the second convolutional layer is followed by a max-pooling layer; the output of the pooling layer is fed into two fully connected layers; and a third fully connected layer maps the output into affine parameters. The 6-dimensional output is then fed into the spatial transformer layer to obtain the transformed input image. During fine-tuning, the learning rate of the spatial transformer is set relative to that of the classification network, and the training process converges.

The results are presented in Table 2. They show that data augmentation, feature map augmentation, transform pre-processing, and patch reordering are all effective ways to improve the rotation or translation invariance of CNNs. Our PR-CNN not only achieves more consistent representations under location changes in the input but also relieves the model from encoding invariance into its parameters; it improves the classification accuracy of the model even on the original test images.


UK-Bench

Method FC6 FC7
CNN 3.381 3.438
CNN-Data-Aug 3.340 3.441
SI-CNN 3.431 3.452
ST-CNN 3.430 3.446
PR-CNN(ours) 3.574 3.539
Table 3: Performance of CNN models on the UK-Bench retrieval dataset. Here, we use the 4096-dimensional features of fc6 and fc7 for evaluation.

We also evaluate our PR-CNN model on the popular image retrieval benchmark UK-Bench [21]. This dataset consists of 2,550 groups of images, each containing four relevant samples of a certain object or scene from different viewpoints. Each of the 10,200 images in total is used as a query to perform image retrieval, aiming to find its counterparts. We choose UK-Bench because viewpoint variation is very common in this dataset. Although many of the variation types are beyond the geometric transformations we attempt to address, we demonstrate the effectiveness of PR-CNN in solving many severe rotation, translation, and scale variations in the image retrieval task.

We directly apply the models trained on ImageNet-2012 for evaluation. The outputs of the fc6 and fc7 layers are used as the feature for each image. We then take the square root of each dimension and perform L2 normalization. To perform image retrieval on UK-Bench, the Euclidean distances of the query image to all database images are computed and sorted, and the images with the smallest distances are returned as the top-ranked results. The NS-Score (average top-four accuracy) is used to evaluate performance; a score of 4 indicates that all relevant images are successfully retrieved in the top-four results.
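A minimal sketch of this retrieval protocol, assuming features are stored row-wise and images are arranged in consecutive groups of four relevant samples (as in UK-Bench); the function names are hypothetical:

```python
import numpy as np

def postprocess(features):
    """Square-root each dimension, then L2-normalize each row
    (the post-processing described above; CNN features are assumed
    non-negative, so a signed root is used for safety)."""
    f = np.sign(features) * np.sqrt(np.abs(features))
    norms = np.linalg.norm(f, axis=1, keepdims=True)
    return f / np.maximum(norms, 1e-12)

def ns_score(features, group_size=4):
    """Average number of relevant images (same group, query included)
    among the top-`group_size` Euclidean neighbors of each query."""
    f = postprocess(features)
    # Pairwise Euclidean distances between all images.
    d = np.linalg.norm(f[:, None, :] - f[None, :, :], axis=2)
    scores = []
    for q in range(len(f)):
        top = np.argsort(d[q])[:group_size]
        group = q // group_size
        scores.append(sum(t // group_size == group for t in top))
    return float(np.mean(scores))
```

A perfect retriever scores 4.0; random features would score close to the chance level.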

As shown in Table 3, data augmentation, feature map augmentation, and the spatial transformer network do not exhibit considerable transform invariance when applied to an unrelated new task. These models may need careful fine-tuning when transferred to a new dataset, and the spatial transformer block is content- and task-dependent. Patch reordering transfers better because it encodes invariance only into the architecture, independent of the content of the input. This demonstrates that our PR-CNN model can be seamlessly transferred to other image-recognition-based applications (e.g., image retrieval) without any re-training or fine-tuning. Meanwhile, for the other models, fc7 presents better invariance than fc6, whereas for our PR-CNN, fc6 is better. A clue can be found in Fig. 2: fc6 presents less parameter redundancy than fc7 in PR-CNN.

Measuring Invariance

Figure 4: Transform invariance measure (the larger, the better). By applying patch reordering on feature maps during training, the invariance of the following layers is significantly improved.

We evaluate the transform invariance achieved by our model using the invariance measure proposed in [9]. In this approach, a neuron is considered to be firing when its response is above a per-neuron threshold. Each threshold is chosen so that the neuron fires on only a small, fixed proportion of the inputs; this proportion is the neuron's selectivity. The local firing rate is then computed as the proportion of transformed inputs to which the neuron fires. To ensure that a neuron is both selective and has a high local firing rate (invariance to the set of transformed inputs), the invariance score of a neuron is the ratio of its local firing rate to its selectivity. We report the average score of the top highest-scoring neurons, as in [13]. Please refer to [9] for more details.
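A simplified sketch of this measure, under the assumption that responses on original and transformed inputs are given as arrays; `invariance_score` and the `select` parameter (the target selectivity) are hypothetical names:

```python
import numpy as np

def invariance_score(base_responses, transformed_responses, select=0.01):
    """base_responses: (num_neurons, num_inputs) responses on the
    original inputs. transformed_responses: (num_neurons, num_inputs,
    num_transforms) responses on transformed versions of each input.

    For each neuron, a threshold is chosen so it fires on a fraction
    `select` of the original inputs (its selectivity). The local
    firing rate is the fraction of transformed versions of those
    firing inputs on which it still fires. Score = rate / selectivity.
    """
    scores = []
    for n in range(base_responses.shape[0]):
        r = base_responses[n]
        t = np.quantile(r, 1.0 - select)      # per-neuron firing threshold
        firing = r > t
        if not firing.any():
            scores.append(0.0)
            continue
        rate = (transformed_responses[n][firing] > t).mean()
        scores.append(rate / select)
    return np.array(scores)
```

Averaging the highest-scoring fraction of neurons per layer gives a curve comparable to Fig. 4.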

Here, we build the transformed dataset by applying rotations and translations with fixed step sizes to the validation images of ImageNet-2012. Fig. 4 shows the invariance score of CNN and PR-CNN measured at the end of each layer. By applying patch reordering to feature maps during training, the invariance of the subsequent layers is significantly improved.

(a) Performance of PR-CNN for rotated images.
(b) Performance of PR-CNN for translated images.
Figure 5: Performance of PR-CNN when the PR module is applied to different layers with different levels. The performance drops significantly when patch reordering is performed in lower layers, because it breaks the local spatial correlations among patches that are vital for recognizing meaningful visual patterns in those layers.

Effect of Patch Reordering on Image Representations

To investigate the effect of patch reordering on the representations of transformed images, we show the output feature maps of conv5 in AlexNet in Fig. 6. With patch reordering, the feature maps are much more consistent than those of the original CNN when faced with global rotations and translations.

(a) Input images.
(b) Output feature maps of Conv5 in AlexNet before patch reordering.
(c) Output feature maps of Conv5 in AlexNet after patch reordering.
Figure 6: The feature maps of conv5 for the first image in the ImageNet-2012 validation set. With patch reordering, the feature maps are much more consistent than those of the original CNN when faced with global rotations and translations.

Effect of Patch Reordering on Different Layers

To investigate the effect of applying patch reordering to different convolutional layers and the effect of pyramid levels, we train different PR-CNN models with patch reordering applied to different convolutional layers at one or two pyramid levels. At a single level, we divide the feature maps into four blocks; with two levels, the feature maps are first divided into four blocks, and each block is further divided into four sub-blocks. The experimental results are presented in Fig. 5. The performance drops significantly when we perform patch reordering in lower layers, while multi-level reordering does not differ significantly from single-level reordering for higher convolutional layers. Low-level features, such as edges and corners, are detected in lower layers and must be combined over a local spatial range for further recognition. Because patch reordering breaks this local spatial correlation and treats each block as an independent feature, the generated representation becomes less meaningful. This also explains why the multi-level division of feature maps significantly improves model performance in lower layers: a hierarchical reordering preserves more local spatial relationships than a single flat one.
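The hierarchical variant can be sketched as a recursive version of single-level reordering. This is a minimal NumPy illustration, not the authors' code; `hierarchical_reorder` is a hypothetical name, L1 energy and power-of-two feature map sizes are assumed:

```python
import numpy as np

def hierarchical_reorder(fmap, levels=2, grid=2):
    """Multi-level patch reordering sketch: reorder the grid*grid
    patches of the current level by descending L1 energy, then recurse
    inside each patch for the remaining levels. Reordering thus stays
    local within each upper-level patch, preserving more spatial
    structure than one flat reordering of many small blocks.
    """
    if levels == 0:
        return fmap
    C, H, W = fmap.shape
    ph, pw = H // grid, W // grid
    patches = [fmap[:, r*ph:(r+1)*ph, c*pw:(c+1)*pw]
               for r in range(grid) for c in range(grid)]
    order = np.argsort([np.abs(p).sum() for p in patches])[::-1]
    out = np.empty_like(fmap)
    for dst, src in enumerate(order):
        r, c = divmod(dst, grid)
        out[:, r*ph:(r+1)*ph, c*pw:(c+1)*pw] = \
            hierarchical_reorder(patches[src], levels - 1, grid)
    return out
```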


Conclusion

In this paper, we introduced a very simple and effective way to improve the rotation and translation invariance of CNN models. By reordering the feature maps of CNN layers, the model is relieved from encoding location invariance into its parameters, and it generates more consistent representations when faced with location changes of local patterns in the input. Our architecture does not need any extra parameters or pre-processing of input images. Experiments show that our model outperforms standard CNN models in both image recognition and image retrieval tasks.

Acknowledgments This work is supported by NSFC under the contracts No.61572451 and No.61390514, the 973 project under the contract No.2015CB351803, the Youth Innovation Promotion Association CAS CX2100060016, Fok Ying Tung Education Foundation WF2100060004, the Fundamental Research Funds for the Central Universities WK2100060011, Australian Research Council Projects: FT-130101457, DP-140102164, and LE140100061.


References

  • [1] J. Bruna and S. Mallat (2013) Invariant scattering convolution networks. IEEE Trans. Pattern Anal. Mach. Intell. 35, pp. 1872–1886. Cited by: Related Work.
  • [2] K. Chatfield, V. Lempitsky, A. Vedaldi, and A. Zisserman (2011) The devil is in the details: an evaluation of recent feature encoding methods. In Proceedings of the British Machine Vision Conference, pp. 76.1–76.12. Cited by: Patch Reordering.
  • [3] A. Coates and A. Y. Ng (2011) The importance of encoding versus training with sparse coding and vector quantization. In ICML, pp. 921–928. Cited by: Patch Reordering.
  • [4] T. S. Cohen and M. Welling (2015) Transformation properties of learned visual representations. ICLR. Cited by: Related Work.
  • [5] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell (2015) Long-term recurrent convolutional networks for visual recognition and description. CVPR. Cited by: Introduction.
  • [6] H. Fang, S. Gupta, F. Iandola, R. Srivastava, L. Deng, P. Dollar, J. Gao, X. He, M. Mitchell, J. C. Platt, L. Zitnick, and G. Zweig (2015) From captions to visual concepts and back. CVPR. Cited by: Introduction.
  • [7] R. Gens and P. M. Domingos (2014) Deep symmetry networks. NIPS. Cited by: Related Work.
  • [8] G. Gkioxari, R. Girshick, and J. Malik (2015) Contextual action recognition with r*cnn. ICCV. Cited by: Introduction.
  • [9] I. J. Goodfellow, Q. V. Le, A. M. Saxe, H. Lee, and A. Y. Ng (2009) Measuring invariances in deep networks. NIPS. Cited by: Measuring Invariance.
  • [10] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. CVPR. Cited by: Introduction.
  • [11] M. Jaderberg, K. Simonyan, A. Zisserman, and K. Kavukcuoglu (2015) Spatial transformer networks. NIPS. Cited by: Related Work, MNIST, MNIST, ImageNet-2012, Experiments.
  • [12] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell (2014) Caffe: convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093. Cited by: Experiments.
  • [13] A. Kanazawa and A. Sharma (2014) Locally scale-invariant convolutional neural networks. NIPS. Cited by: Related Work, Measuring Invariance, Experiments.
  • [14] A. Karpathy and F. Li (2015) Deep visual-semantic alignments for generating image descriptions. CVPR. Cited by: Introduction.
  • [15] J. J. Kivinen and C. K. I. Williams (2011) Transformation equivariant Boltzmann machines. In Artificial Neural Networks and Machine Learning, pp. 1–9. Cited by: Related Work.
  • [16] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) ImageNet classification with deep convolutional neural networks. NIPS, pp. 1097–1105. Cited by: Parameter Redundancy in Convolutional Neural Networks, Experiments.
  • [17] Q. V. Le, J. Ngiam, Z. Chen, D. J. hao Chia, P. W. Koh, and A. Y. Ng (2010) Tiled convolutional neural networks. NIPS. Cited by: Related Work.
  • [18] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE, pp. 2278–2324. Cited by: Introduction.
  • [19] K. Lenc and A. Vedaldi (2015) Understanding image representations by measuring their equivariance and equivalence. CVPR. Cited by: Related Work.
  • [20] J. Long, E. Shelhamer, and T. Darrell (2015) Fully convolutional networks for semantic segmentation. CVPR. Cited by: Introduction.
  • [21] D. Nister and H. Stewenius (2006) Scalable recognition with a vocabulary tree. In CVPR, pp. 2161–2168. Cited by: UK-Bench, Experiments.
  • [22] Y. Pan, T. Mei, T. Yao, H. Li, and Y. Rui (2016) Joint modeling embedding and translation to bridge video and language. CVPR. Cited by: Introduction.
  • [23] K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. CoRR abs/1409.1556. Cited by: Introduction.
  • [24] K. Simonyan and A. Zisserman (2014) Two-stream convolutional networks for action recognition in videos. NIPS. Cited by: Introduction.
  • [25] K. Sohn and H. Lee (2012) Learning invariant representations with local transformations. In ICML, Cited by: Related Work, Related Work.
  • [26] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich (2015) Going deeper with convolutions. CVPR. Cited by: Introduction.
  • [27] L. Yao, A. Torabi, K. Cho, N. Ballas, C. Pal, H. Larochelle, and A. Courville (2015) Describing videos by exploiting temporal structure. ICCV. Cited by: Introduction.