A Closed-Form Learned Pooling for Deep Classification Networks

06/10/2019 · Vighnesh Birodkar et al.

In modern computer vision tasks, convolutional neural networks (CNNs) are indispensable for image classification due to their efficiency and effectiveness. Part of their superiority over other architectures comes from the fact that a single, local filter is shared across the entire image. However, there are scenarios where we may need to treat spatial locations in a non-uniform manner. We see this in nature when considering how humans have evolved foveation to process different areas in their field of vision with varying levels of detail. In this paper we propose a way to enable CNNs to learn different pooling weights for each pixel location. We do so by introducing an extended definition of a pooling operator. This operator can learn a strict super-set of what can be learned by average pooling or convolutions. It has the benefit of being shared across feature maps and can be encouraged to be local or diffuse depending on the data. We show that, for fixed network weights, our pooling operator can be computed in closed form by spectral decomposition of matrices associated with class separability. Through experiments, we show that this operator benefits generalization for ResNets and CNNs on the CIFAR-10, CIFAR-100 and SVHN datasets, and improves robustness to geometric corruptions and perturbations on the CIFAR-10-C and CIFAR-10-P test sets.


1 Introduction

Convolutional Neural Networks (CNNs) have revolutionized the field of computer vision (Krizhevsky et al., 2012a; He et al., 2016b, 2017). Their success (compared to fully connected networks) is often attributed to their weight sharing in the form of a convolution, which reduces the number of learnable parameters (Krizhevsky et al., 2012b). In addition, the "shift invariance" property of convolution has long been believed to be crucial for improved generalization in vision tasks (Fukushima and Miyake, 1982), although some modifications may be required (Azulay and Weiss, 2018; Zhang, 2019). Shift invariance, while crucial for handling translation in images, is a very limited form of real-world geometric transformation. For instance, convolutional representations are not invariant or equivariant to other basic transforms such as image rotation and scaling (Azulay and Weiss, 2018).

There have been recent attempts to incorporate additional forms of invariance, such as rotation, reflection, and scaling (Sifre and Mallat, 2013; Bruna and Mallat, 2013; Esteves et al., 2018a; Kanazawa et al., 2014; Worrall et al., 2017). However, these methods engineer the invariance into the network, requiring a-priori identification of the invariance types of interest. In this work, our goal is to achieve invariance using a data-driven approach, so as to handle geometric transforms that may not fall into the categories mentioned above.

One way to achieve invariance or equivariance for transforms beyond translation is non-uniform sub-sampling of the image (to be fed as input to a convolutional layer). For example, log-polar sampling of an image results in a new image where rotation and scaling in the original image become equivalent to translation in the resulting image (Esteves et al., 2018b). This suggests that by adapting the pooling operator, one can build representations that are better suited to a variety of geometric transforms.
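To make this concrete, the short NumPy sketch below (our illustration, not from the original paper; the function name and output resolution are assumptions) resamples an image on a log-polar grid. Under this resampling, rotating the input about its center shifts the output along the angle axis, and isotropically scaling it shifts the output along the log-radius axis, so both become translations.

    import numpy as np

    def log_polar_sample(img, out_h=64, out_w=64):
        """Resample a (h, w) image on a log-polar grid centered at its middle.
        Rows of the output index log-radius; columns index angle."""
        h, w = img.shape
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        log_r = np.linspace(0.0, np.log(np.hypot(cy, cx)), out_h)
        theta = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
        r = np.exp(log_r)
        rows = cy + np.outer(r, np.sin(theta))   # (out_h, out_w) source rows
        cols = cx + np.outer(r, np.cos(theta))   # (out_h, out_w) source cols
        # Nearest-neighbour lookup, clipped to the image bounds.
        rows = np.clip(np.rint(rows).astype(int), 0, h - 1)
        cols = np.clip(np.rint(cols).astype(int), 0, w - 1)
        return img[rows, cols]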

Non-uniform sampling is also the scheme chosen by nature: foveal vision implements a spatially-varying sampling similar to the log-polar transform (Larson and Loschky, 2009). Central and peripheral regions are sampled at different frequencies, and both contribute to efficient and effective human vision. In addition, it is known that non-uniform sampling of an image can facilitate image registration when geometric transforms go beyond translation (Mobahi et al., 2012). Our results with the learned pooling operator confirm the advantage of spatially varying pooling. For example, Figure 1(a) shows the response map of a learned pooling operator on the SVHN dataset. The operator places more weight on the central pixels to take advantage of the fact that SVHN digits are mostly in the center. Contrast this with the operator learned for a CIFAR-100 model, which places weight all across the spatial field (see Supplementary). The form of the learned pooling operator also affects the pooled feature maps. In Figure 1(b), the pooled feature maps are more clustered around the mean feature map of each class, compared to the feature maps produced by a regular CNN. This results in better separability of classes and better generalization, as seen in Table 1.

In addition to adapting to the geometric transforms present in the data, and hence improving generalization, our learned pooling operator helps with the robustness of the model. It has been observed that small geometric transforms in the image can cause prediction errors in existing deep models, and these errors can be traced to the pooling operator (Azulay and Weiss, 2018; Zhang, 2019); max pooling introduces aliasing effects in the representation. While average pooling can prevent the aliasing issue (because it acts as a low-pass filter), the blurring causes loss of information and hence inferior classification performance compared to max pooling. However, by adapting the pooling operator to the data, in a way that provides more class separability, the relevant information is picked up automatically. In fact, our experiments show that the learned pooling can outperform the naive uniform down-sampling scheme used in most state-of-the-art models (strided pooling) and yet remain robust to geometric perturbations on robustness benchmark datasets (CIFAR-10-C and CIFAR-10-P (Hendrycks and Dietterich, 2019)).

(a) Visualizing the pooling operator at 9 locations. (b) Comparing the outputs of pooling operations.

Figure 1: On the left, we visualize the learned pooling map for a CNN model on the reduced SVHN dataset. Each heat map corresponds to one of the 9 evenly spaced locations in the output feature map. For each pixel in the output, we can see where the algorithm chooses to put a positive (red) or negative (blue) weight over the input. On the right, we compare the output of the default pooling (strided convolution) against the output of our pooling operator. We randomly select a channel and display the average value of that channel per class, along with the 5 other values closest to the average. We note that with our pooling operator, the per-class feature maps are closer to the per-class average feature maps than with the regular CNN model.

2 Related Work

Convolutional Neural Networks (CNNs) rely on pooling or sub-sampling to reduce the size of the hidden representation. This is known to have important implications for the kinds of invariances a network exhibits and for its generalization abilities (Cohen and Shashua, 2016). Earlier architectures relied on average pooling (LeCun et al., 1998) and max pooling (Krizhevsky et al., 2012a), whereas modern ones learn the parameters of pooling through strided convolutions (He et al., 2016a).

The history of pooling in computer vision, however, predates the popularity of CNNs. For instance, Boureau et al. (2011) combine SIFT with pooling performed separately over learned clusters of features. Malinowski and Fritz (2013) learn the pooling parameters of all spatial lower-level features through a fully connected network, whereas Gong et al. (2014) learn pooling separately at each scale using VLAD (Jégou et al., 2010). Girshick et al. (2015) define distance-transform pooling as part of deformable part models, using a quadratic function of the distance from the center, and learn it with a latent SVM (Felzenszwalb et al., 2009) on top of a pre-trained CNN. Li et al. (2015) use pooling at different spatial scales and also pool over the color channels, aggregating with the max operator.

Pooling has also been used to aggregate inputs of varying size into a fixed-size representation. Passalis and Tefas (2017) use a fixed number of RBF neurons on top of a regular CNN to output a fixed-size representation irrespective of the input image size.

Zhou et al. (2017) use a specially designed pooling function in a multi-instance learning setting to output tags for a video from the tags predicted for each frame. Miech et al. (2017) use NetVLAD (Arandjelovic et al., 2016) and approximations of Bag-of-Words and Fisher vector encodings to aggregate features across time for video classification.

Recently, there have been attempts to learn the parameters of local pooling operations in CNNs end-to-end through gradient descent. Sun et al. (2017) propose learning one local pooling operator per channel and train a deep neural network for classification. Saeedan et al. (2018) try to preserve small details in the input while pooling, introducing two new parameters per input feature map to control which details are preserved. Lee et al. (2016) experiment with learned and fixed combinations of average/max pooling, and also suggest organizing the outputs of multiple local filters in a binary tree to learn the parameters for mixing them. Although these approaches learn pooling parameters from data, the pooling operator is limited to being spatially uniform; the same sampling scheme is used to pool each output pixel. As we discuss in Section 1, spatially varying pooling is necessary to learn representations that are efficient and robust for transformations other than translation.

3 Method

3.1 Notation

We first formalize the definition of linear pooling. Let $[n] \triangleq \{1, 2, \dots, n\}$. Given a spatial domain $\Omega \triangleq [h] \times [w]$ and the set of intensity values $\mathbb{R}$, a feature map of depth $c$ is a map $f : \Omega \to \mathbb{R}^c$. We can represent a feature map in matrix form:

$F_k[x, y] \triangleq f_k(x, y), \quad k \in [c]$ (1)
$F \triangleq [\operatorname{vec}(F_1), \dots, \operatorname{vec}(F_c)]$ (2)
$F \in \mathbb{R}^{m \times c}$ (3)

where $m \triangleq |\Omega| = hw$ denotes the domain size and $\operatorname{vec}(\cdot)$ converts a matrix into a column vector by concatenating the columns of the matrix.

We define linear pooling as the operator $P \in \mathbb{R}^{n \times m}$ which maps $F$ into another feature map $G$, where $n \le m$. That is, the operator may shrink the input spatially but maintains the number of channels. Formally, the output of the linear operator is an element of $\mathbb{R}^{n \times c}$, where $\Omega' \triangleq [h'] \times [w']$ is the output domain and $n \triangleq |\Omega'| = h'w'$. Note that the operator is applied to each channel (column) of the input matrix to generate the corresponding channel (column) of the output matrix:

$G \triangleq P\,F$ (4)
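Under this definition, pooling is nothing more than a matrix product applied channel-wise. A minimal NumPy sketch (ours; the shapes are illustrative) with $P$ set to global averaging:

    import numpy as np

    m, n, c = 16 * 16, 8 * 8, 32     # input locations, output locations, channels
    F = np.random.randn(m, c)        # input feature map, one column per channel
    P = np.full((n, m), 1.0 / m)     # a valid linear pooling operator (global average)

    G = P @ F                        # pooled feature map
    assert G.shape == (n, c)         # spatial size shrinks, channel count is kept

Average pooling and strided sub-sampling correspond to particular sparse, spatially repeated choices of the rows of $P$; the formulation above leaves the rows free.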

3.2 Formulation

Obviously, average pooling can be seen as a special case of this linear operator, when the entries of the operator are set in a specific way. However, one may wonder whether, within the space of all linear operators, there could be better choices than average pooling. Naturally, the answer is data dependent, and hence we let the data itself discover the operator that suits the task at hand. In the classification setting, a good operator should help with classification. One possible way to quantify "helping with classification" is improved separability of the classes, as explained below.

To simplify the formulation, we focus on finding each row of $P$ separately. Let $p_j \in \mathbb{R}^m$ be the $j$'th row of $P$, arranged as a column vector. Similarly, let $g_j \in \mathbb{R}^c$ be the $j$'th row of $G$, also arranged as a column vector. Then the pooling identity in (4) can be equivalently expressed using vectors as:

$g_j = F^\top p_j$ (5)

To reduce mathematical clutter, we drop the index $j$ from $p_j$ and $g_j$. The reader should remember that the following result needs to be applied for each choice of $j \in [n]$ separately. Hence, with abuse of notation, we proceed as:

$g = F^\top p$ (6)

To define separability, we require a training set. Consider a set of feature maps $\{F_i\}_{i=1}^{N}$, whose elements are associated with labels $y_i \in [k]$, with $k$ being the number of classes in the dataset. Writing $g_i \triangleq F_i^\top p$ for the pooled vectors, we define the following total and per-class average quantities:

$\bar{g} \triangleq \frac{1}{N} \sum_{i=1}^{N} g_i, \qquad \bar{g}_y \triangleq \frac{1}{N_y} \sum_{i : y_i = y} g_i$ (7)

where $N_y$ is the number of training examples with label $y$.

Inspired by Linear Discriminant Analysis (LDA), we quantify the separability of the classification as the ratio of the between-class scatter $s_b$ and the within-class scatter $s_w$:

$s_b \triangleq \sum_{y=1}^{k} N_y \, \| \bar{g}_y - \bar{g} \|^2, \qquad s_w \triangleq \sum_{i=1}^{N} \| g_i - \bar{g}_{y_i} \|^2$ (8)

To achieve a good representation for classification, we aim to improve the separability of the data points by maximizing the ratio:

$\max_{p} \; s_b / s_w$ (9)

Plugging the definitions from (7) and (6) into the above objective function yields:

$\max_{p} \; \dfrac{p^\top \big( \sum_{y=1}^{k} N_y (\bar{F}_y - \bar{F})(\bar{F}_y - \bar{F})^\top \big)\, p}{p^\top \big( \sum_{i=1}^{N} (F_i - \bar{F}_{y_i})(F_i - \bar{F}_{y_i})^\top \big)\, p}$ (10)

where $\bar{F}$ and $\bar{F}_y$ are defined based on the $F_i$'s in a similar way as done for $\bar{g}$ and $\bar{g}_y$ in (7). For brevity, define:

$A \triangleq \sum_{y=1}^{k} N_y \, (\bar{F}_y - \bar{F})(\bar{F}_y - \bar{F})^\top$ (11)
$B \triangleq \sum_{i=1}^{N} (F_i - \bar{F}_{y_i})(F_i - \bar{F}_{y_i})^\top$ (12)

This way, our goal is to maximize separability:

$p^\star \triangleq \arg\max_{p} \; \dfrac{p^\top A\, p}{p^\top B\, p}$ (13)
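For concreteness, a direct NumPy sketch of (11) and (12) follows (our illustration, not the authors' code; the array layout, with feature maps stacked as an $(N, m, c)$ array, is an assumption):

    import numpy as np

    def scatter_matrices(F, y, k):
        """F: (N, m, c) array of vectorized feature maps; y: integer labels in
        [0, k). Returns the (m, m) between- and within-class scatter matrices."""
        N, m, c = F.shape
        F_bar = F.mean(axis=0)                    # total mean, (m, c)
        A = np.zeros((m, m))
        B = np.zeros((m, m))
        for cls in range(k):
            F_cls = F[y == cls]                   # feature maps of one class
            N_y = len(F_cls)
            F_bar_y = F_cls.mean(axis=0)          # per-class mean
            d = F_bar_y - F_bar
            A += N_y * (d @ d.T)                  # between-class scatter, eq. (11)
            R = F_cls - F_bar_y                   # within-class residuals
            B += np.einsum('imc,ijc->mj', R, R)   # sum_i R_i R_i^T, eq. (12)
        return A, B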

3.3 Closed-Form Solution

The solution to (13) is ill-posed: if some $p$ is a solution, then so is $c\,p$ for any $c \neq 0$. To avoid such freedom of scale, we anchor $p^\top B\, p$ to a fixed value, e.g. $p^\top B\, p = 1$, and then solve:

$p^\star = \arg\max_{p \,:\, p^\top B p = 1} \; p^\top A\, p$ (14)

In addition, we wish to keep $p$ localized so that the operator respects the topology of the space $\Omega$. This is important when the pooled feature maps are to be processed by convolution operators in future layers. We can encourage localization by introducing a penalty term of the form $p^\top D\, p$, where $D$ is a diagonal matrix with positive components. How the elements of $D$ are chosen is described in Section 3.4.

Applying the localization penalty and the anchoring of $p^\top B\, p$ results in the following optimization:

$p^\star = \arg\max_{p \,:\, p^\top B p = 1} \; p^\top A\, p - \lambda\, p^\top D\, p$ (15)

where $\lambda \ge 0$ is the penalty coefficient. It can be shown (see the supplementary appendix for proof) that the solution must satisfy the following generalized eigenvalue problem for some $\gamma$:¹

$A\, p = \mu\, (B + \gamma D)\, p$ (16)

¹ It turns out $\gamma$ is proportional to $\lambda$ and hence still serves as a penalty coefficient. See the supplementary appendix for details.

One way to solve the generalized eigenvalue problem is by matrix inversion. If the matrix on the r.h.s. is invertible, then we have:

$(B + \gamma D)^{-1} A\, p = \mu\, p$ (17)

which implies that the optimal $p$ must be an (in fact the leading) eigenvector of the following matrix:

$M \triangleq (B + \gamma D)^{-1} A$ (18)

Since the matrix $D$ is diagonal with positive entries, and the matrix $B$ is positive semi-definite, the term $\gamma D$ has a regularization effect when computing the inverse of $B + \gamma D$. Thus we refer to $D$ as a regularization matrix.

3.4 Regularization Matrix

We now explain how the components of $D$ are chosen. For clarity, we temporarily (throughout this subsection) switch from the brief notation $p$ to the full notation $p_j$. We also need to switch from $D$ to $D_j$ accordingly. Note that each component of $p_j$ corresponds to a coordinate in $\Omega$; to show this relationship we use the notation $v_i \in \Omega$. Similarly, for the space $\Omega'$, each index $j$ is associated with a coordinate, and the relationship is shown via $u_j \in \Omega'$. We penalize the $i$'th component of $p_j$ (recall each $p_j$ is a vector of size $m$, thus $i \in [m]$) by its coordinate distance from that of $j$. Since in general $|\Omega| \neq |\Omega'|$, a scale correction needs to be done. This way, the amount of penalty for the $i$'th component of $p_j$, which is encoded in the diagonal element $[D_j]_{i,i}$, is set to $\| v_i - s\, u_j \|^2$, where $s$ is the scale factor relating the two grids. Here $[D_j]_{i,i}$ refers to the $(i,i)$'th component of the matrix $D_j$.

In words, this penalty scheme means that if a point in the source feature map contributes to a point in the destination feature map that is spatially far from it, then that contribution is penalized.
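A sketch of this construction (ours; taking the scale factor $s$ to be the ratio of input to output grid heights and assuming row-major vectorization of the feature maps are our assumptions):

    import numpy as np

    def regularization_diagonal(h, w, h_out, w_out, j):
        """Diagonal of D_j for output index j, as a length h*w vector. Entry i is
        the squared distance between input coordinate i and the rescaled output
        coordinate j. Assumes feature maps are vectorized in row-major order."""
        s = h / float(h_out)                          # scale correction
        vy, vx = np.unravel_index(np.arange(h * w), (h, w))
        uy, ux = np.unravel_index(j, (h_out, w_out))
        return (vy - s * uy) ** 2 + (vx - s * ux) ** 2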

3.5 Algorithm

The resulting procedure is shown in Algorithm 1. Note that the matrices $A$ and $B$ are the same for every output location $j$; the beginning of the algorithm, up to line 16, computes these matrices. However, the matrix $D_j$ and the vector $p_j$ both depend on $j$, and thus line 20 to the end loops over $j$ to compute each $D_j$ and its resulting $p_j$.

1:  Input: Training pairs $\{(F_i, y_i)\}_{i=1}^{N}$ where $F_i \in \mathbb{R}^{m \times c}$ and $y_i \in [k]$, with $k$ being the number of classes; penalty coefficient $\gamma$; scaling factor $s$.
2:  $A \leftarrow 0$;  $B \leftarrow 0$;  $\bar{F} \leftarrow \frac{1}{N} \sum_{i=1}^{N} F_i$
3:  for $y = 1$ to $k$ do
4:     $N_y \leftarrow 0$;  $\bar{F}_y \leftarrow 0$
5:     for $i = 1$ to $N$ do
6:        if $y_i = y$ then
7:           $N_y \leftarrow N_y + 1$
8:           $\bar{F}_y \leftarrow \bar{F}_y + F_i$
9:        end if
10:     end for
11:     $\bar{F}_y \leftarrow \bar{F}_y / N_y$
12:     $A \leftarrow A + N_y (\bar{F}_y - \bar{F})(\bar{F}_y - \bar{F})^\top$
13:     $B_y \leftarrow \sum_{i : y_i = y} (F_i - \bar{F}_y)(F_i - \bar{F}_y)^\top$
14:     $B \leftarrow B + B_y$
15:  end for
16:  $P \leftarrow 0 \in \mathbb{R}^{n \times m}$
17:  $v_i \leftarrow$ coordinate in $\Omega$ of input index $i$, for all $i \in [m]$
18:  $u_j \leftarrow$ coordinate in $\Omega'$ of output index $j$, for all $j \in [n]$
19:  $d(i, j) \triangleq \| v_i - s\, u_j \|^2$ for $i \in [m]$, $j \in [n]$
20:  for $j = 1$ to $n$ do
21:     $D_j \leftarrow \operatorname{diag}(d(1, j), \dots, d(m, j))$
22:     $p_j \leftarrow$ leading eigenvector of $(B + \gamma D_j)^{-1} A$
23:     $P[j, :] \leftarrow p_j^\top / \| p_j \|$
24:  end for
25:  return $P$
Algorithm 1 Learning Pooling Operator.
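Putting the pieces together, a compact SciPy sketch of Algorithm 1 (ours, not the authors' released implementation; it reuses the hypothetical helpers scatter_matrices and regularization_diagonal from the earlier sketches, and assumes $B + \gamma D_j$ is positive definite):

    import numpy as np
    from scipy.linalg import eigh

    def learn_pooling(F, y, k, h, w, h_out, w_out, gamma):
        """Learn the (n, m) pooling matrix P, one output location at a time."""
        A, B = scatter_matrices(F, y, k)            # shared across all locations
        m, n = h * w, h_out * w_out
        P = np.zeros((n, m))
        for j in range(n):
            d_j = regularization_diagonal(h, w, h_out, w_out, j)
            M = B + gamma * np.diag(d_j)            # regularized r.h.s. matrix
            # Generalized symmetric eigenproblem A p = mu M p; eigh returns
            # eigenvalues in ascending order, so the last column is the leader.
            _, vecs = eigh(A, M)
            p = vecs[:, -1]
            P[j] = p / np.linalg.norm(p)            # L2-normalized row
        return P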

3.6 Implementation Details

Choice of Norm. The last lines of the algorithm return the rows of $P$ normalized as $p_j / \| p_j \|$. We explore the $\ell_1$ and $\ell_2$ norms in the experiments.

Normalization of Feature Maps. The pooling operator is shared across all channels. However, the intensity values in each channel could potentially have a different center and scale, making it hard for the same pooling to have a similar effect on all channels. To fix this, we normalize feature maps before forming the matrices and applying Algorithm 1. More precisely, for a given feature map $F_i$ ($i \in [N]$, with $N$ being the size of the training set), the normalized feature map is defined channel-wise as $\tilde{F}_i \triangleq (F_i - \mu) / \sigma$, where $\mu$ and $\sigma$ hold the per-channel means and standard deviations computed over the training set. After the pooling operator is applied, we transform the feature map back to its original scale and center by multiplying by $\sigma$ and adding $\mu$. This keeps the output of the pooling operation consistent with the rest of the network.
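A minimal sketch of this normalization (ours; it assumes the training feature maps are stacked as an $(N, m, c)$ array as in the earlier sketches):

    import numpy as np

    def normalize_channels(F, eps=1e-8):
        """Standardize each channel over the whole training set. Returns the
        normalized maps together with (mu, sigma) so the transform can be undone."""
        mu = F.mean(axis=(0, 1), keepdims=True)       # per-channel mean, (1, 1, c)
        sigma = F.std(axis=(0, 1), keepdims=True) + eps
        return (F - mu) / sigma, mu, sigma

    # After pooling a batch G of shape (N, n, c): G * sigma + mu restores
    # the original scale and center.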

Use in Deep Networks. Consider a trained deep network that uses some typical pooling operator. We can convert the pooling operator at any given layer to a learned one by treating the hidden representation at that layer as the input feature maps to Algorithm 1 (after applying the channel normalization described above). We then adapt the network weights to the learned pooling by retraining the network. This process can be repeated for multiple layers. In our experiments, however, we observe that learned pooling even at a single layer can already give a boost in test accuracy.
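For instance, with a PyTorch-style model (a sketch under our assumptions, not the authors' released code), the learned operator can be dropped in as a fixed, non-trainable linear map over spatial locations:

    import torch
    import torch.nn as nn

    class LearnedPooling(nn.Module):
        """Applies a fixed pooling matrix P of shape (n, m) to every channel."""
        def __init__(self, P, out_hw):
            super().__init__()
            # register_buffer keeps P on the right device but out of the optimizer.
            self.register_buffer('P', torch.as_tensor(P, dtype=torch.float32))
            self.out_hw = out_hw                   # (h_out, w_out)

        def forward(self, x):                      # x: (batch, channels, h, w)
            b, c, h, w = x.shape
            flat = x.reshape(b, c, h * w)          # vectorize the spatial grid
            pooled = torch.einsum('nm,bcm->bcn', self.P, flat)
            return pooled.reshape(b, c, *self.out_hw)

The surrounding layers are then retrained around this fixed operator, as described above.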

Number of Eigenvectors. For simplicity of presentation, Algorithm 1 uses the top eigenvector. In principle, however, the top few eigenvectors could be used instead. In fact, modern architectures often double the number of output channels while down-sampling via strided convolutions. To imitate that, we select the top two eigenvectors of (18), which results in two feature maps per input feature map. This keeps the size of the hidden representation after pooling consistent between our method and the common practice.

Computing the Generalized Eigenvalue. For simpler exposition, in Section 3.3 and in Algorithm 1 we have used matrix inversion to solve the generalized eigenvalue problem. However, there are more efficient approaches that avoid matrix inversion. In addition, we only need the top-1 or top-2 eigenvectors, which allows further computational savings. There are numerical routines that exploit both properties, such as scipy.sparse.linalg.eigs, which we used in our Python implementation.
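For example, the top two generalized eigenvectors can be obtained directly, without forming the inverse (a runnable sketch with stand-in matrices; in practice $A$ and $B + \gamma D_j$ come from Algorithm 1):

    import numpy as np
    from scipy.sparse.linalg import eigs

    rng = np.random.default_rng(0)
    m = 64
    X = rng.standard_normal((m, m))
    A = X @ X.T                                    # stand-in between-class scatter
    M = np.eye(m) + 0.1 * np.diag(rng.random(m))   # stand-in for B + gamma * D_j

    # Solve A p = mu M p for the two eigenpairs of largest magnitude,
    # avoiding the explicit inverse of M.
    vals, vecs = eigs(A, k=2, M=M, which='LM')
    pooling_rows = np.real(vecs).T                 # two candidate pooling rows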

Learning Pooling by SGD. One may wonder why not use gradient descent on a total loss (the sum of the usual cross-entropy and the separability criterion) instead of Algorithm 1, and thus simultaneously learn the network weights and the pooling operator. The answer is that this is either impractical or leads to inferior performance. To learn the regularized pooling operator, it is necessary to store the matrix $D_j$ for each of the $n$ locations in the output feature map, which would incur a memory cost of $O(nm)$ and be extremely space inefficient. The performance with an un-regularized pooling map is reported in Table 4 in the appendix. It performs worse than our approach in almost all cases, and in some, even worse than the baseline.

4 Experiments

We study the performance of our pooling operator on the SVHN (Netzer et al., 2011) and CIFAR-10/CIFAR-100 (Krizhevsky and Hinton, 2009) datasets. For the SVHN dataset we also experiment with a reduced 5% subset to measure the performance of our algorithm in the presence of limited labelled data. We use two models for our experiments: a CNN model, which is a 4-layer ConvNet, and an 18-layer ResNet (He et al., 2016a). Both models have 3 pooling layers in which they downsample via strided convolutions. (Additional details about the models, datasets and training can be found in the supplementary material, along with the source code. The implementation will be open-sourced with the camera-ready version.)

4.1 Effect on generalization

Table 1 shows the effect of our pooling operator on generalization. We improve on the ResNet model in all settings, and with the CNN model on both versions of the SVHN dataset. The largest gain is observed with the CNN model on the reduced SVHN dataset. Even when the CNN model fails to improve on the CIFAR datasets, its performance is on par with the baseline CNN.

Model | Dataset | Baseline Error | Pooling Layer | γ | Norm | Error with replaced pooling
[Table 1 body: rows for the CNN and ResNet models on Reduced SVHN, SVHN, CIFAR-10 and CIFAR-100; the numeric entries are not recoverable in this copy.]

Table 1: Effect of replacing the pooling operator on generalization. We report the mean test error and standard deviation after averaging over 5 trials. When multiple pooling layers are replaced, this is indicated by separating the hyper-parameters with a comma. Experiments which result in improvements are highlighted in bold.

4.2 Robustness to corruptions and perturbations

Hendrycks and Dietterich (2019) developed a dataset of real-world corruptions to test model robustness. For this set of experiments, the model is trained on the original CIFAR-10 training set and evaluated on the modified test sets provided. We use the given CIFAR-10-C and CIFAR-10-P test sets and evaluate our approach by measuring the suggested quantities. For all of these measurements, we use the original ResNet architecture as the baseline.

In Table 2 we measure the Corruption Error on the CIFAR-10-C dataset, as suggested by Hendrycks and Dietterich (2019). In the bottommost row we report the average corruption error for each corruption type. We note that, among other things, the model with the replaced pooling operator is more robust in the presence of geometric transformations for 4 out of 5 cases. We define geometric transformations as those which can move or displace pixels.
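In this metric, for a corruption type $c$, the test errors $E_{s,c}$ are aggregated over the five severities $s$ and normalized by the errors of the baseline model; roughly (our paraphrase of the metric in Hendrycks and Dietterich (2019)):

$\mathrm{CE}_c = 100 \times \dfrac{\sum_{s=1}^{5} E_{s,c}}{\sum_{s=1}^{5} E^{\mathrm{baseline}}_{s,c}}$

so values below 100 indicate improved robustness over the baseline ResNet.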

In Table 3, we measure how our algorithm responds to gradually applied perturbations with the CIFAR-10-P dataset. Each cell reports the Flip Probability, which indicates the probability of the predicted label changing in the presence of a perturbation. The bottom row reports the Flip Rate, which is the ratio of the flip probability of our model over that of the original ResNet model. We note that our model does better than the original network for 10 out of 14 perturbations, and for 6 out of 8 geometric perturbations.

Model | Severity | Geometric: Defocus Blur, Frosted Glass Blur, Motion Blur, Zoom Blur, Elastic | Non-Geometric: Gaussian Noise, Shot Noise, Impulse Noise, Snow, Frost, Fog, Brightness, Contrast, Pixelate, JPEG
[Table 2 body: per-corruption, per-severity errors for the ResNet baseline and the ResNet with pooling replaced, plus the final CE row, are not recoverable in this copy.]
Table 2: Measuring robustness to corruptions as defined by Hendrycks and Dietterich (2019) on the CIFAR-10-C test set. Each cell lists the average test error (percentage) over 5 models for a particular corruption and severity. We use our CIFAR-10 model as described in Table 1. In the last row, we report the average Corruption Error (CE) across all 5 severities. In this metric lower is better, and the vanilla ResNet itself would score 100. We highlight corruptions for which we do better in bold.
Model | Severity | Geometric: Scale, Rotate, Tilt, Translate, Shear, Motion Blur, Zoom Blur, Gaussian Blur | Non-Geometric: Brightness, Spatter, Snow, Shot Noise, Speckle Noise, Gaussian Noise
[Table 3 body: per-perturbation Flip Probabilities for the ResNet baseline and the ResNet with pooling replaced, plus the final Flip Rate row, are not recoverable in this copy.]
Table 3: Measuring robustness to perturbations as defined by Hendrycks and Dietterich (2019) on the CIFAR-10-P dataset, using our best model from Table 1. In each cell we report the Flip Probability (FP) averaged over 5 models. In the last row we report the Flip Rate using the default ResNet model as a baseline. When there are multiple severities we report the average. Flip probability and rate are reported out of 100, with lower being better. Improvements are highlighted in bold.

5 Conclusion

We propose a pooling operator that is more general than those currently used in the literature, and present an algorithm to learn it in closed form given the distribution of its inputs. Compared to pooling operations that are shared across spatial locations, ours allows more flexibility by being spatially varying. We replace the standard pooling operations in a CNN and a ResNet model and see benefits in generalization on the CIFAR-10/CIFAR-100 and SVHN datasets. The operator is demonstrably more robust to unseen geometric transformations, which we show by evaluating on the CIFAR-10-C and CIFAR-10-P test sets.

References

  • Arandjelovic et al. (2016) Arandjelovic, R., Gronat, P., Torii, A., Pajdla, T., and Sivic, J. (2016). NetVLAD: CNN architecture for weakly supervised place recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5297–5307.
  • Azulay and Weiss (2018) Azulay, A. and Weiss, Y. (2018). Why do deep convolutional networks generalize so poorly to small image transformations? arXiv preprint arXiv:1805.12177.
  • Boureau et al. (2011) Boureau, Y.-L., Le Roux, N., Bach, F., Ponce, J., and LeCun, Y. (2011). Ask the locals: multi-way local pooling for image recognition. In ICCV’11-The 13th International Conference on Computer Vision.
  • Bruna and Mallat (2013) Bruna, J. and Mallat, S. (2013). Invariant scattering convolution networks. IEEE transactions on pattern analysis and machine intelligence, 35(8):1872–1886.
  • Cohen and Shashua (2016) Cohen, N. and Shashua, A. (2016). Inductive bias of deep convolutional networks through pooling geometry. arXiv preprint arXiv:1605.06743.
  • Esteves et al. (2018a) Esteves, C., Allen-Blanchette, C., Zhou, X., and Daniilidis, K. (2018a). Polar transformer networks. In International Conference on Learning Representations.
  • Esteves et al. (2018b) Esteves, C., Allen-Blanchette, C., Zhou, X., and Daniilidis, K. (2018b). Polar transformer networks. In International Conference on Learning Representations.
  • Felzenszwalb et al. (2009) Felzenszwalb, P. F., Girshick, R. B., McAllester, D., and Ramanan, D. (2009). Object detection with discriminatively trained part-based models. IEEE transactions on pattern analysis and machine intelligence, 32(9):1627–1645.
  • Fukushima and Miyake (1982) Fukushima, K. and Miyake, S. (1982). Neocognitron: A self-organizing neural network model for a mechanism of visual pattern recognition. In Competition and cooperation in neural nets, pages 267–285. Springer.
  • Girshick et al. (2015) Girshick, R., Iandola, F., Darrell, T., and Malik, J. (2015). Deformable part models are convolutional neural networks. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 437–446.
  • Gong et al. (2014) Gong, Y., Wang, L., Guo, R., and Lazebnik, S. (2014). Multi-scale orderless pooling of deep convolutional activation features. In European conference on computer vision, pages 392–407. Springer.
  • He et al. (2017) He, K., Gkioxari, G., Dollár, P., and Girshick, R. (2017). Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 2961–2969.
  • He et al. (2016a) He, K., Zhang, X., Ren, S., and Sun, J. (2016a). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778.
  • He et al. (2016b) He, K., Zhang, X., Ren, S., and Sun, J. (2016b). Identity mappings in deep residual networks. In European conference on computer vision, pages 630–645. Springer.
  • Hendrycks and Dietterich (2019) Hendrycks, D. and Dietterich, T. (2019). Benchmarking neural network robustness to common corruptions and perturbations. arXiv preprint arXiv:1903.12261.
  • Ioffe and Szegedy (2015) Ioffe, S. and Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.
  • Jégou et al. (2010) Jégou, H., Douze, M., Schmid, C., and Pérez, P. (2010). Aggregating local descriptors into a compact image representation. In CVPR 2010-23rd IEEE Conference on Computer Vision & Pattern Recognition, pages 3304–3311. IEEE Computer Society.
  • Kanazawa et al. (2014) Kanazawa, A., Sharma, A., and Jacobs, D. W. (2014). Locally scale-invariant convolutional neural networks. CoRR, abs/1412.5104.
  • Kingma and Ba (2014) Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  • Krizhevsky and Hinton (2009) Krizhevsky, A. and Hinton, G. (2009). Learning multiple layers of features from tiny images. Technical report, Citeseer.
  • Krizhevsky et al. (2012a) Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012a). Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105.
  • Krizhevsky et al. (2012b) Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012b). Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105.
  • Larson and Loschky (2009) Larson, A. M. and Loschky, L. C. (2009). The contributions of central versus peripheral vision to scene gist recognition. Journal of Vision, 9(10):6–6.
  • LeCun et al. (1998) LeCun, Y., Bottou, L., Bengio, Y., Haffner, P., et al. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324.
  • Lee et al. (2016) Lee, C.-Y., Gallagher, P. W., and Tu, Z. (2016). Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In Artificial Intelligence and Statistics, pages 464–472.
  • Lee et al. (2014) Lee, C.-Y., Xie, S., Gallagher, P., Zhang, Z., and Tu, Z. (2014). Deeply-supervised nets. arXiv preprint arXiv:1409.5185.
  • Li et al. (2015) Li, C., Reiter, A., and Hager, G. D. (2015). Beyond spatial pooling: fine-grained representation learning in multiple domains. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4913–4922.
  • Liu (2018) Liu, K. (2018). pytorch-cifar. https://github.com/kuangliu/pytorch-cifar.
  • Malinowski and Fritz (2013) Malinowski, M. and Fritz, M. (2013). Learnable pooling regions for image classification. arXiv preprint arXiv:1301.3516.
  • Miech et al. (2017) Miech, A., Laptev, I., and Sivic, J. (2017). Learnable pooling with context gating for video classification. arXiv preprint arXiv:1706.06905.
  • Mobahi et al. (2012) Mobahi, H., Zitnick, C. L., and Ma, Y. (2012). Seeing through the blur. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 1736–1743. IEEE.
  • Netzer et al. (2011) Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., and Ng, A. Y. (2011). Reading digits in natural images with unsupervised feature learning.
  • Passalis and Tefas (2017) Passalis, N. and Tefas, A. (2017). Learning bag-of-features pooling for deep convolutional neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 5755–5763.
  • Saeedan et al. (2018) Saeedan, F., Weber, N., Goesele, M., and Roth, S. (2018). Detail-preserving pooling in deep networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9108–9116.
  • Sifre and Mallat (2013) Sifre, L. and Mallat, S. (2013). Rotation, scaling and deformation invariant scattering for texture discrimination. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1233–1240.
  • Sun et al. (2017) Sun, M., Song, Z., Jiang, X., Pan, J., and Pang, Y. (2017). Learning pooling for convolutional neural network. Neurocomputing, 224:96–104.
  • Worrall et al. (2017) Worrall, D. E., Garbin, S. J., Turmukhambetov, D., and Brostow, G. J. (2017). Harmonic networks: Deep translation and rotation equivariance. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5028–5037.
  • Zhang (2019) Zhang, R. (2019). Making convolutional networks shift-invariant again. arXiv preprint arXiv:1904.11486.
  • Zhou et al. (2017) Zhou, Y., Sun, X., Liu, D., Zha, Z., and Zeng, W. (2017). Adaptive pooling in multi-instance learning for web video annotation. In Proceedings of the IEEE International Conference on Computer Vision, pages 318–327.

6 Supplementary Appendix

6.1 Derivation of Closed Form

The goal is to solve the following optimization:

$p^\star = \arg\max_{p \,:\, p^\top B p = 1} \; p^\top A\, p - \lambda\, p^\top D\, p$ (19)

Using the Lagrange multiplier $\mu$, the optimization has the following Lagrangian:

$\mathcal{L}(p, \mu) \triangleq p^\top A\, p - \lambda\, p^\top D\, p - \mu\, (p^\top B\, p - 1)$ (20)

The derivative of $\mathcal{L}$ w.r.t. $p$ is:

$\nabla_{p} \mathcal{L} = 2 A\, p - 2 \lambda\, D\, p - 2 \mu\, B\, p$ (21)

Setting the derivative to zero and left-multiplying by $p^\top$, it is not difficult to verify that $\mu = p^\top A\, p - \lambda\, p^\top D\, p$ (because $p^\top B\, p = 1$). Hence, defining $\gamma \triangleq \lambda / \mu$, we learn that (because $\mu \neq 0$):

$A\, p = \mu\, (B + \gamma D)\, p$ (22)

6.2 Dataset details

In our descriptions, an epoch indicates the number of steps necessary to perform one full pass over the training data. Whenever reduced datasets are used, the number of steps in each epoch is scaled accordingly. To choose hyper-parameters, we use cross-validation, evaluating performance on 5 distinct random held-out subsets of the training data.

  • CIFAR-10/CIFAR-100. For both CIFAR datasets, we normalize the images by subtracting the mean and dividing by the standard deviation over the entire training set. During training, images are augmented using the technique described in (Lee et al., 2014), which consists of padding images by 4 pixels, randomly cropping a piece of the original size, and adding horizontal flips. All models on the CIFAR datasets are trained for a fixed number of epochs, with the learning rate for the ResNet model decayed at two fixed epochs. For cross-validation we use 10% subsets containing 5000 samples.

  • SVHN. For the SVHN dataset each image is normalized separately, with no additional data augmentation applied. All models on this dataset are trained for a fixed number of epochs, with the learning rate for the ResNet model decayed at two fixed epochs. Cross-validation is done by holding out 5% of the training data.

  • Reduced SVHN. We use this dataset to measure the performance of our algorithm in the presence of fewer labelled samples. This is a reduced version of the SVHN dataset in which we train with only 5% of the training data. For cross-validation, 20% of the reduced dataset is held out.

6.3 Model Details

We use our pooling operator within two models. These models perform spatial pooling using stride-2 convolutions, and we experiment with replacing the 3 different layers in which they reduce spatial dimensions, with the exception of the final average-pooling layer:

  • CNN. This is a 4-layer ConvNet. The first convolution layer is followed by a ReLU non-linearity. The 3 convolution layers which follow are strided convolutions with a stride of 2 (to reduce spatial dimensions) that double the number of channels from the previous layer, each followed by batch-norm (Ioffe and Szegedy, 2015) and ReLU. At the end, the feature map is aggregated via global average pooling and fed into a linear layer which outputs the logits. The CNN model is trained using the Adam optimizer (Kingma and Ba, 2014) with a fixed learning rate.

  • ResNet. The second architecture we use is an 18-layer ResNet described in (He et al., 2016b), with its hyper-parameters chosen from the implementation by Liu (2018). The network is trained with SGD with momentum, with a starting learning rate that is decayed by a fixed factor after a fixed number of epochs for each dataset.

6.4 Training Procedure

As the first step of our algorithm, we train our models till convergence using the default pooling operation in each model. This is followed by using Algorithm 1 (in the main paper) to compute our pooling operator along with the normalization parameters $\mu$ and $\sigma$. While estimating the matrices $A$ and $B$, for both models, we use at most a fixed number of samples per class. We then replace each pooling layer, one at a time, with our own pooling operator for various values of $\gamma$ and choices of norm, and re-train the network from a random initialization. We choose the setting that leads to the best average cross-validation error. Using this setting, we train on the full training dataset and report numbers on the test set.

It is possible to use this procedure multiple times to replace more than one pooling layer. In our experiments, we tried replacing multiple pooling layers on the CIFAR-10 and CIFAR-100 datasets. Only on CIFAR-10 did replacing the 3rd and the 1st pooling layers of a ResNet lead to a non-trivial reduction in cross-validation error. For all other models and datasets we report performance after replacing only a single pooling layer inside the model.

6.5 Additional visualizations

6.5.1 ResNet on reduced SVHN

Figure 2: Visualization of the best-performing pooling map for the ResNet model on the Reduced SVHN dataset.

6.5.2 ResNet on CIFAR-100

Figure 3: Visualization of the best-performing pooling map for the ResNet model on the CIFAR-100 dataset.

6.6 Comparing operator learned through SGD

Model | Dataset | Baseline Error | Pooling Layer | Error with SGD
[Table 4 body: rows for the CNN and ResNet models on Reduced SVHN, SVHN, CIFAR-10 and CIFAR-100; the numeric entries are not recoverable in this copy.]
Table 4: Effect of learning a pooling map through SGD. To keep the results comparable with Table 1, we learned 2 distinct pooling maps, which has the effect of doubling the number of channels while down-sampling. We chose the pooling layer by using cross-validation as described in Section 6.4.