Revisiting Spatial Invariance with Low-Rank Local Connectivity

Gamaleldin F. Elsayed, et al. · 02/07/2020

Convolutional neural networks are among the most successful architectures in deep learning. This success is at least partially attributable to the efficacy of spatial invariance as an inductive bias. Locally connected layers, which differ from convolutional layers in their lack of spatial invariance, usually perform poorly in practice. However, these observations still leave open the possibility that some degree of relaxation of spatial invariance may yield a better inductive bias than either convolution or local connectivity. To test this hypothesis, we design a method to relax the spatial invariance of a network layer in a controlled manner. In particular, we create a low-rank locally connected layer, where the filter bank applied at each position is constructed as a linear combination of a basis set of filter banks. By varying the number of filter banks in the basis set, we can control the degree of departure from spatial invariance. In our experiments, we find that relaxing spatial invariance improves classification accuracy over both convolution and locally connected layers across the MNIST, CIFAR-10, and CelebA datasets. These results suggest that spatial invariance in convolution layers may be overly restrictive.


1 Introduction


Figure 1: Is spatial invariance a good inductive bias?

Convolutional architectures perform better than locally connected (or fully connected) architectures on computer vision problems. The primary distinction between convolutional and locally connected networks is that convolution requires spatial invariance in the learned parameter set, imposed through weight sharing. One long-standing hypothesis (H1) is that this spatial invariance is a good inductive bias for images (Ruderman and Bialek, 1994; Simoncelli and Olshausen, 2001; Olshausen and Field, 1996). H1 posits that predictive performance would systematically degrade as spatial invariance is relaxed. An alternative hypothesis (H2) suggests that spatial invariance is overly restrictive and that some degree of variability would aid predictive performance. Which of H1 and H2 better describes reality is largely untested across natural and curated academic datasets, and is the subject of this work.

Figure 2: Filters for each spatial location. Convolutional layers use the same filter bank for each spatial location (left). Locally connected layers learn a separate filter bank for each spatial location (right). By contrast, low-rank locally connected (LRLC) layers use a filter bank for each spatial location generated by combining a shared basis set of filter banks (middle). Both the basis set and the combining weights are learned end-to-end through optimization. The number of filter banks in the basis set (i.e., the rank parameter) thus determines the degree of relaxation of spatial invariance of the LRLC layer.

Convolutional neural networks (CNNs) are now the dominant approach across many computer vision tasks. Convolution layers possess two main properties that are believed to be key to their success: local receptive fields and spatially invariant filters. In this work, we seek to revisit the latter. Previous work comparing convolutional layers, which share filters across all spatial locations, with locally connected layers, which have no weight sharing, has found that convolution is advantageous on common datasets (LeCun, 1989; Bartunov et al., 2018; Novak et al., 2018). However, this observation leaves open the possibility that some departure from spatial invariance could outperform both convolution and local connectivity (Figure 1).

The structure of CNNs is often likened to the primate visual system (LeCun et al., 2015). However, the brain has no direct mechanism to share weights across space. Like feature maps in CNNs, visual areas are broadly organized according to neurons' selectivity to retinal position, but in higher-level visual areas, position selectivity is weaker. Instead, high-level visual areas subdivide into stimulus-selective subregions (Hasson et al., 2002; Arcaro et al., 2009; Lafer-Sousa and Conway, 2013; Rajimehr et al., 2014; Srihasam et al., 2014; Saygin et al., 2016; Livingstone et al., 2017). This gradual progression from organization by position selectivity to organization by feature selectivity is more consistent with local connectivity than convolution.

Motivated by the lack of synaptic weight sharing in the brain, we hypothesized that neural networks could achieve greater performance by relaxing spatial invariance (Figure 1). Particularly at higher layers of the neural network, where receptive fields cover most or all of the image, applying the same weights at all locations may be a less efficient use of computation than applying different weights at different locations. However, evidence suggests that typical datasets are too small to constrain the parameters of a locally connected layer; functions expressible by convolutional layers are a subset of those expressible by locally connected layers, yet convolution typically achieves higher performance (LeCun, 1989; Bartunov et al., 2018; Novak et al., 2018).

To get intuition for why some relaxation of spatial invariance could be useful, consider images of natural scenes with ground and sky regions. It may be a bad idea to apply different local filters to different parts of the sky that have similar appearance. However, it may also be overly limiting to apply the same filter bank to both the sky and the ground regions. Some degree of relaxation of spatial invariance, such as different sky and ground filters, may better suit this hypothetical data.

To test the hypothesis that spatial invariance is an overly restrictive inductive bias, we create a new tool that allows us to relax spatial invariance. We develop a low-rank locally connected (LRLC) layer that can parametrically adjust the degree of spatial invariance. This layer is one particular method to relax spatial invariance by reducing weight sharing. Rather than learning a single filter bank to apply at all positions, as in a convolutional layer, or a separate filter bank for each position, as in a locally connected layer, the LRLC layer learns a set of filter banks, which are linearly combined using per-position combining weights (Figure 2).

In our experiments, we find that relaxing spatial invariance with the LRLC layer leads to better performance than both convolutional and locally connected layers across three datasets (MNIST, CIFAR-10, and CelebA). These results suggest that some relaxation of spatial invariance is a better inductive bias for image datasets than either the spatial invariance enforced by convolution layers or the complete lack of spatial invariance in locally connected layers.

2 Related Work

The idea of local connectivity in connectionist models predates the popularity of backpropagation and convolution. Inspired by the organization of visual cortex (Hubel and Wiesel, 1963, 1968), several early neural network models consisted of one or more two-dimensional feature maps where neurons preferentially receive input from other neurons at nearby locations (Von der Malsburg, 1973; Fukushima, 1975). Breaking with biology, the Neocognitron (Fukushima, 1980) shared weights across spatial locations, resulting in spatial invariance. However, the Neocognitron was trained using a competitive learning algorithm rather than gradient descent. LeCun (1989) combined weight sharing with backpropagation, demonstrating considerable gains over locally connected networks (LCNs) on a digit recognition task.

Although the last decade has seen revitalized interest in CNNs for computer vision, local connectivity has fallen out of favor. When layer computation is distributed across multiple nodes, weight sharing introduces additional synchronization costs (Krizhevsky, 2014); thus, the first massively parallel deep neural networks employed exclusively locally connected layers (Raina et al., 2009; Uetz and Behnke, 2009; Dean et al., 2012; Le et al., 2012; Coates et al., 2013). Some of the first successful neural networks for computer vision tasks combined convolutional and locally connected layers (Hinton et al., 2012; Goodfellow et al., 2013; Gregor et al., 2014), as have networks for face recognition (Taigman et al., 2014; Sun et al., 2014, 2015; Yim et al., 2015). However, newer architectures, even those designed for face recognition (Schroff et al., 2015; Liu et al., 2017), generally use convolution exclusively.

Work comparing convolutional and locally connected networks for computer vision tasks has invariably found that CNNs yield better performance. Bartunov et al. (2018) compared classification performance on multiple image datasets as part of a study on biologically plausible learning algorithms; convolution achieved higher accuracy across datasets. Novak et al. (2018) derived a kernel equivalent to an infinitely wide CNN at initialization and showed that, in this infinite-width limit, CNNs and LCNs are equivalent. They found that SGD-trained CNNs substantially outperform both SGD-trained LCNs and this kernel. However, d’Ascoli et al. (2019) found that initially training a convolutional network and then converting its convolutional layers to equivalent fully connected layers near the end of training led to a slight increase in performance.

Other work has attempted to combine the efficiency of convolution with some of the advantages of local connectivity. Nowlan and Hinton (1992) suggested a “soft weight-sharing” approach that penalizes the difference between the distribution of weights and a mixture of Gaussians. Other work has used periodic weight sharing, also known as tiling, where filters a fixed number of pixels apart share weights (Le et al., 2010; Gregor and LeCun, 2010), or subdivided feature maps into patches where weights are shared only within each patch (Zhao et al., 2016). CoordConv (Liu et al., 2018) concatenates feature maps containing the x and y coordinates of the pixels to the input of a CNN, permitting direct use of position information throughout the network.

Input-dependent low rank local connectivity, which we explore in Section 4.2, is further related to previous work that applies input-dependent convolutional filters. Spatial soft attention mechanisms (Wang et al., 2017; Jetley et al., 2018; Woo et al., 2018; Linsley et al., 2019; Fukui et al., 2019) can be interpreted as a mechanism for applying different weights at different positions via per-position scaling of entire filters. Self-attention (Bahdanau et al., 2015; Vaswani et al., 2017), which has recently been applied to image models (Bello et al., 2019; Ramachandran et al., 2019), provides an alternative mechanism to integrate information over space with content-dependent mixing weights. Other approaches apply the same convolutional filters across space, but select filters or branches separately for each example (McGill and Perona, 2017; Fernando et al., 2017; Gross et al., 2017; Chen et al., 2019; Yang et al., 2019). The dynamic local filtering layer of Jia et al. (2016) uses a neural network to predict a separate set of filters for each position. Our approach predicts only the combining weights for a fixed set of bases, which provides control over the degree of spatial invariance through the size of the layer kernel basis set.

3 Methods

3.1 Preliminaries

Let $x \in \mathbb{R}^{H \times W \times C_{in}}$ be an input with $C_{in}$ channels ($H$: input height, $W$: input width, and $C_{in}$: input channels). In convolution layers, the input is convolved with a filter bank $w \in \mathbb{R}^{k_h \times k_w \times C_{in} \times C_{out}}$ to compute $y \in \mathbb{R}^{H \times W \times C_{out}}$ ($k_h$: filter height size, $k_w$: filter width size, and $C_{out}$: filter output channels). For clarity of presentation, we fix the layer output and input to have the same spatial size, and the stride to be 1, though we relax these constraints in the experiments. More formally, the operation of $w$ on the local input patch of size $k_h \times k_w \times C_{in}$ centered at location $(i, j)$, denoted $x^{(i,j)}$, is:

$y^{(i,j)} = x^{(i,j)} \circledast w$    (1)

where $y^{(i,j)} \in \mathbb{R}^{C_{out}}$ is the output at location $(i, j)$ and $\circledast$ is defined as the element-wise multiplication of the input patch and the filter along the first 3 axes, followed by summation over those axes. The spatial invariance of convolution refers to applying the same filter bank $w$ to input patches at all locations (Figure 2, left).

Locally connected layers, on the other hand, do not share weights. Similar to convolution, they apply filters with local receptive fields. However, the filters are not shared across space (Figure 2, right). Formally, each output is computed by applying a different filter bank $w^{(i,j)}$ to the corresponding input patch (i.e., $y^{(i,j)} = x^{(i,j)} \circledast w^{(i,j)}$).
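To make the contrast concrete, here is a minimal NumPy sketch (not the authors' code) of the Equation (1)-style patch operation. Convolution applies one shared filter bank at every position, while a locally connected layer indexes a separate bank per position; all shapes, toy sizes, and the "same" zero-padding are illustrative assumptions.

```python
import numpy as np

H, W, C_in, C_out, k = 8, 8, 3, 4, 3   # toy sizes; k x k receptive field
pad = k // 2

def apply_filter_bank(patch, bank):
    """Equation (1): multiply patch and bank element-wise over the first three
    axes (height, width, input channels) and sum, giving a C_out vector."""
    return np.einsum('hwc,hwcd->d', patch, bank)

def layer_forward(x, banks):
    """banks has shape (H, W, k, k, C_in, C_out): one filter bank per position.
    For convolution, every banks[i, j] is the same array."""
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    y = np.empty((H, W, banks.shape[-1]))
    for i in range(H):
        for j in range(W):
            y[i, j] = apply_filter_bank(xp[i:i + k, j:j + k, :], banks[i, j])
    return y

x = np.random.randn(H, W, C_in)
w_conv = np.random.randn(k, k, C_in, C_out)                      # shared bank
conv_banks = np.broadcast_to(w_conv, (H, W, k, k, C_in, C_out))  # convolution
lc_banks = np.random.randn(H, W, k, k, C_in, C_out)              # locally connected
print(layer_forward(x, conv_banks).shape, layer_forward(x, lc_banks).shape)
```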

Empirically, locally connected layers perform poorly compared to convolutional layers (Novak et al., 2018). Intuitively, local regions in images are not completely independent, and we expect filters learned over one local region to be useful when applied to a nearby region. While locally connected layers are strictly more powerful than convolutional layers and could in theory converge to the convolutional solution, in practice they do not, and instead overfit the training data. However, the superior performance of convolution layers over locally connected layers (LeCun, 1989; Bartunov et al., 2018; Novak et al., 2018) does not imply that spatial invariance is strictly required.

Below, we develop methods that control the degree of spatial invariance a layer can have, which allows us to test the hypothesis that spatial invariance may be overly restrictive.

3.2 Low-rank locally connected layer

Here, we design a locally connected layer with a spatial rank parameter $K$ that controls the degree of spatial invariance. We adjust the degree of spatial invariance by using a basis set of $K$ local filter banks, instead of the single filter bank in a convolution layer or the $H \times W$ filter banks in a classic locally connected layer ($K$ is a hyperparameter that may be adjusted based on a validation subset; $1 \leq K \leq H \times W$). For each input patch $x^{(i,j)}$, we construct a filter bank $w^{(i,j)}$ to operate on that patch as a linear combination of the members of the basis set. That is,

$w^{(i,j)} = \sum_{k=1}^{K} \alpha_k^{(i,j)} w_k$    (2)

where $\{w_1, \dots, w_K\}$ is the basis set and $\alpha^{(i,j)} \in \mathbb{R}^{K}$ are the weights that combine the filter banks in the basis set at location $(i, j)$. This formulation is equivalent to a low-rank factorization, with rank $K$, of the locally connected kernel of the layer. Thus, we term this layer the “low-rank locally connected” (LRLC) layer (Figure 2, middle).
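As a small illustration of Equation (2) (shapes and toy sizes assumed here), the per-position filter bank is a weighted sum of the $K$ basis filter banks; expanding the basis this way yields the rank-$K$ locally connected kernel that the layer is equivalent to, which is also how the layer can be converted to a locally connected layer after training.

```python
import numpy as np

H, W, K, k, C_in, C_out = 8, 8, 2, 3, 3, 4
basis = np.random.randn(K, k, k, C_in, C_out)   # basis set of K filter banks
alpha = np.random.randn(H, W, K)                # combining weights per position

# w[i, j] = sum_k alpha[i, j, k] * basis[k]  ->  (H, W, k, k, C_in, C_out)
w_per_position = np.einsum('ijk,kxycd->ijxycd', alpha, basis)
print(w_per_position.shape)
```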

Note that, in this paper, we use a basis set whose filters all have the same structure. However, this layer could also be used with a basis set containing filters with different structures (e.g., different filter sizes and/or dilation rates).

The filter banks in the basis set are linearly combined using weights that are specific to each spatial location. In particular, with input spatial size $H \times W$ and $K$ filter banks in the basis set, we need $H \times W \times K$ combining weights to combine these filter banks and form the filter bank at each spatial location. We propose two ways to learn these combining weights. One method learns weights that are shared across all examples, while the second method predicts the weights per example as a function of the input.

3.2.1 Fixed combining weights

The simplest method of learning combining weights is to learn $K$ scalars per spatial position. This approach is well-suited to datasets with spatially inhomogeneous features, e.g., datasets of aligned faces. However, the number of combining weights then scales linearly with the number of pixels in the image, which may be large. Thus, to reduce parameters, we learn combining weights per row and per column of each location as follows:

$\tilde{\alpha}_k^{(i,j)} = \alpha^{r}_{i,k} + \alpha^{c}_{j,k}$    (3)

where $\alpha^{r} \in \mathbb{R}^{H \times K}$ are per-row weights and $\alpha^{c} \in \mathbb{R}^{W \times K}$ are per-column weights. This formulation reduces the number of combining-weight parameters from $H W K$ to $(H + W) K$, which limits the expressivity of the layer (i.e., constrains the maximum degree of relaxation of spatial invariance). This formulation also performs better in practice (Figure Supp.2).

We further normalize the weights to limit the scale of the combined filters. Common choices for normalization are dividing by the weight norm or using the softmax function. In our early experimentation, we found that softmax normalization performs slightly better. Thus, the combining weights are computed as follows:

$\alpha_k^{(i,j)} = \frac{\exp\left(\tilde{\alpha}_k^{(i,j)}\right)}{\sum_{k'=1}^{K} \exp\left(\tilde{\alpha}_{k'}^{(i,j)}\right)}$    (4)
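A minimal sketch of Equations (3) and (4) under assumed shapes: per-row and per-column logits are summed at each position and normalized with a softmax over the $K$ filter banks.

```python
import numpy as np

H, W, K = 8, 8, 2
alpha_row = np.random.randn(H, K)   # learned per-row weights
alpha_col = np.random.randn(W, K)   # learned per-column weights

logits = alpha_row[:, None, :] + alpha_col[None, :, :]          # (H, W, K), Eq. (3)
logits = logits - logits.max(-1, keepdims=True)                 # numerical stability
alpha = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)  # softmax, Eq. (4)
assert np.allclose(alpha.sum(-1), 1.0)
```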

The filter banks in the basis set and the combining weights can all be learned end-to-end. In practice, we implement this layer with convolution and point-wise multiplication operations, as in Algorithm 1, rather than forming the equivalent locally connected layer. This implementation choice is due to locally connected layers being slower in practice, because current hardware is memory-bandwidth-limited, while convolution is highly optimized and fast. We initialize the combining weights to a constant, which makes the layer at initialization equivalent to a convolution layer with a random kernel, though our main findings remained the same with or without this initialization (Figure Supp.1).

At training time, the parameter count of the LRLC layer is approximately $K$ times that of a corresponding convolutional layer, as is the computational cost of Algorithm 1. However, after the network is trained, the LRLC layer can be converted to a locally connected layer. When convolution is implemented as matrix multiplication, locally connected layers have the same FLOP count as convolution (Figure Supp.4), although the amount of memory needed to store the weights scales with the spatial size of the feature map.

  Input: $x \in \mathbb{R}^{H \times W \times C_{in}}$
  Trainable Parameters: $\{w_1, \dots, w_K\}$ (basis set), $\alpha^{r} \in \mathbb{R}^{H \times K}$ (combining weights for rows), $\alpha^{c} \in \mathbb{R}^{W \times K}$ (combining weights for columns), $b^{r} \in \mathbb{R}^{H}$ (biases for rows), $b^{c} \in \mathbb{R}^{W}$ (biases for columns), $b^{ch} \in \mathbb{R}^{C_{out}}$ (biases for channels). Initialize $O \leftarrow 0$.
  Compute normalized combining weights $\alpha_k^{(i,j)}$ from $\alpha^{r}$ and $\alpha^{c}$ (Equations 3 and 4)
  for $k = 1$ to $K$ do
      $Y_k \leftarrow x * w_k$ (convolution with filters in basis set)
      $O \leftarrow O + \alpha_k \odot Y_k$ (combining weights)
  end for
  $O \leftarrow O + b$ (biases, Equation 5)
  Return: $O$
Algorithm 1 Low-Rank Locally Connected Layer
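The sketch below renders Algorithm 1 in NumPy as a simplified illustration (not the released implementation): one convolution per basis filter bank, mixed by the softmax-normalized combining weights, plus a spatially varying bias $b$ constructed as in Equation (5) of the next subsection. Toy sizes, variable names, and the "same" padding are assumptions.

```python
import numpy as np

H, W, C_in, C_out, K, k = 8, 8, 3, 4, 2, 3
pad = k // 2

def conv2d_same(x, bank):
    """Plain 'same'-padded convolution of x (H, W, C_in) with bank (k, k, C_in, C_out)."""
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    y = np.empty((H, W, bank.shape[-1]))
    for i in range(H):
        for j in range(W):
            y[i, j] = np.einsum('hwc,hwcd->d', xp[i:i + k, j:j + k, :], bank)
    return y

def lrlc_forward(x, basis, a_row, a_col, b):
    # Normalized combining weights (Equations 3 and 4).
    logits = a_row[:, None, :] + a_col[None, :, :]                  # (H, W, K)
    logits = logits - logits.max(-1, keepdims=True)
    alpha = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
    # One convolution per basis filter bank, weighted and accumulated.
    out = np.zeros((H, W, basis.shape[-1]))
    for m in range(basis.shape[0]):
        out += alpha[..., m:m + 1] * conv2d_same(x, basis[m])
    return out + b                                                  # spatially varying bias

x = np.random.randn(H, W, C_in)
out = lrlc_forward(
    x,
    basis=np.random.randn(K, k, k, C_in, C_out),
    a_row=np.random.randn(H, K),
    a_col=np.random.randn(W, K),
    b=np.random.randn(H, W, C_out),   # bias tensor built as in Equation (5)
)
print(out.shape)  # (H, W, C_out)
```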
Spatially varying bias

Typically, a learned bias per channel is added to the output of a convolution. Here, we allow the bias added to the LRLC output to also vary spatially. Similar to the combining weights, per-row and per-column biases are learned and are added to the standard channel bias. Formally, we define the layer biases $b^{(i,j)}$ as:

$b^{(i,j)} = b^{r}_{i} + b^{c}_{j} + b^{ch}$    (5)

where $b^{r} \in \mathbb{R}^{H}$, $b^{c} \in \mathbb{R}^{W}$, and $b^{ch} \in \mathbb{R}^{C_{out}}$. The special case of the LRLC layer with $K = 1$ is equivalent to a convolution operation followed by adding the spatially varying bias. We use this case in our experiments as a simple baseline to test whether relaxing spatial invariance in just the biases is enough to see improvements.
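Under the shapes assumed above (a scalar per row, a scalar per column, and a vector over output channels), the bias tensor of Equation (5) can be built with simple broadcasting:

```python
import numpy as np

H, W, C_out = 8, 8, 4
b_row, b_col, b_ch = np.random.randn(H), np.random.randn(W), np.random.randn(C_out)
b = b_row[:, None, None] + b_col[None, :, None] + b_ch   # (H, W, C_out)
print(b.shape)
```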

Table 1: Spatial invariance may be overly restrictive. Top-1 accuracy of different models (mean ± SE). The optimal rank in LRLC is obtained by evaluating models on a separate validation subset.

Figure 3: Low-rank local connectivity is a good inductive bias for image datasets. Vertical axis shows top-1 test accuracy on the digit classification task on MNIST (left), the object classification task on CIFAR-10 (middle), and the gender classification task on CelebA (right). Horizontal axis shows the locally connected kernel spatial rank used for the low-rank locally connected (LRLC) layer placed at the first, second, or third layer, or at all layers of the network. The accuracy of a regular convolutional network is shown as a dotted black line for reference. Error bars indicate standard errors computed from training models from 10 random initializations. The LRLC layer outperforms classic convolution, suggesting that convolution is overly restrictive, consistent with H2 in Figure 1.
3.2.2 Input-Dependent Combining Weights

The fixed combining weights formulation intuitively will work best when all images are aligned with structure that appears consistently in the same spatial position. Many image datasets have some alignment by construction, and we expect this approach to be particularly successful for such datasets. However, this formulation may not be well-suited to datasets without image alignment. In this section, we describe an extension of the LRLC layer that conditions the combining weights on the input.

Formally, we modify the combining weights in Equation 3 to make them a function of the input:

$\tilde{\alpha}^{(i,j)} = g(x)_{i,j}$    (6)

where $g$ is a lightweight neural network that predicts the combining weights for each position. More formally, $g$ takes in the input $x \in \mathbb{R}^{H \times W \times C_{in}}$ and outputs weights $g(x) \in \mathbb{R}^{H \times W \times K}$. The predicted weights are then normalized as in Equation 4 and used as before to combine the filter banks in the basis set to form local filters for each spatial location. Similar to Section 3.2.1, a spatially varying bias is also applied to the output of the layer. The architecture used for $g$ has low computational cost, consisting of several dilated separable convolutions applied in parallel, followed by a small series of cheap aggregation layers that output an $H \times W \times K$ tensor. The full architecture of $g$ is detailed in supplementary Section B and shown in Figure Supp.5.
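The sketch below illustrates Equation (6); the random 1x1 projection standing in for $g$ is purely hypothetical (the actual architecture of $g$ is described in Appendix B). The point is only that the combining weights now depend on the input and are normalized exactly as in Equation (4).

```python
import numpy as np

H, W, C_in, K = 8, 8, 3, 2
x = np.random.randn(H, W, C_in)

proj = np.random.randn(C_in, K)              # hypothetical stand-in for g
logits = np.einsum('hwc,ck->hwk', x, proj)   # g(x): (H, W, K) combining-weight logits
logits = logits - logits.max(-1, keepdims=True)
alpha = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
# alpha now varies with the input and is used exactly like the fixed weights above.
print(alpha.shape)
```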

4 Experiments

We performed classification experiments on the MNIST, CIFAR-10, and CelebA datasets. We trained our models without data augmentation or regularization to focus our investigation on the effect of the degree of spatial invariance on generalization. In our experiments, we used the Adam optimizer, training with a linear learning-rate warmup followed by a cosine decay schedule, and used Tensor Processing Unit (TPU) accelerators for all training.

We conducted our study using a network of 3 layers, each with 64 channels and local filters of a fixed size. Each layer is followed by batch normalization and a ReLU nonlinearity. The network ends with a global average pooling operation followed by a linear fully connected layer to form predictions. Our network had sufficient capacity, and we trained for a sufficiently large number of steps, to achieve high training accuracy (Table Supp.2). For all our results, we show the mean accuracy ± standard error based on models trained from 10 different random initializations. Our division into training, validation, and test subsets is shown in Table Supp.1.

4.1 Spatial invariance may be overly restrictive

In this section, we investigate whether relaxing the degree of spatial invariance of a layer provides a better inductive bias for image classification. We replaced convolution layers at different depths of the network (the first, second, or third layer, or all layers) with the low-rank locally connected (LRLC) layer. We varied the spatial rank of the LRLC layer, which controls the degree of deviation from spatially invariant convolution layers toward locally connected layers. If the rank is small, the network is constrained to share filters more across space; the higher the rank, the less sharing is imposed. We trained our models and quantified the generalization accuracy on test data at these different ranks.

When the rank is 1, the LRLC layer is equivalent to a convolution layer with an additional spatial bias. Adding this spatial bias to the convolution boosted accuracy over normal convolution layers (Table 1). Increasing the spatial rank allows the layer to use different filters at different spatial locations and to deviate further from convolutional networks. Our results show that doing so further increases accuracy (Figure 3). We find that the accuracy of networks with LRLC layers placed at any depth, or with all layers replaced by LRLC layers, is higher than that of pure convolutional networks (Figure 3 and Table 1). These findings provide evidence for the hypothesis that spatial invariance may be overly restrictive. Our results further show that relaxing spatial invariance late in the network (near the output) is better than early (near the input). Relaxing spatial invariance late in the network was also better than doing so at every layer (Table 1). The optimal spatial rank varied across datasets; it was lowest for CIFAR-10 and highest for CelebA.

The LRLC layer has the ability to encode position, which vanilla convolution layers lack. This additional position encoding may explain the increased accuracy. Previous work has attempted to give this capability to convolutional networks by augmenting the input with coordinate channels, an approach known as CoordConv (Liu et al., 2018). To test whether the efficacy of the LRLC layer could be explained solely by its ability to encode position, we compared its performance to that of CoordConv. Our results show that CoordConv outperforms vanilla convolution but still lags behind the LRLC network (Table 2 and Figure 4), suggesting that the inductive bias of the LRLC layer is better-suited to the data. Unlike CoordConv, the LRLC layer allows the degree of spatial invariance to be controlled and adapted to each dataset by adjusting the spatial rank, which gives an intuition for why it suits the data better than CoordConv.
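For reference, the CoordConv baseline amounts to concatenating coordinate channels to the input before convolution; the sketch below is a minimal rendering that assumes coordinates normalized to [-1, 1], following the convention described by Liu et al. (2018).

```python
import numpy as np

H, W, C = 8, 8, 3
x = np.random.randn(H, W, C)
ii, jj = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W), indexing='ij')
x_coord = np.concatenate([x, ii[..., None], jj[..., None]], axis=-1)  # (H, W, C + 2)
print(x_coord.shape)
```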

Table 2: LRLC outperforms baselines. Top-1 accuracy of different models (mean ± SE). The optimal rank for LRLC and the optimal width for wide convolution models are obtained by evaluating models on a separate validation subset.

Figure 4: LRLC outperforms baselines. Similar to Figure 3, comparing the LRLC layer to different baselines. Baselines include standard locally connected layers, CoordConv (Liu et al., 2018), and convolutional networks with more than 64 channels, with width adjusted to match the number of parameters in the LRLC layer. Black markers indicate the best model across spatial ranks for LRLC models and across different widths for wide convolution models. The best models are obtained by performing evaluation on a separate validation subset.
Table 3: Fixed vs. input-dependent combining weights. Top-1 accuracy of different models (mean ± SE). The optimal rank is obtained by evaluating models on a separate validation subset.

Figure 5: Input-dependent combining weights. The LRLC layer learns fixed weights to combine filter banks in the basis set and construct a filter bank to be applied to each input location. The input-dependent LRLC layer uses a simple network to adapt the combining weights to different inputs, making it more suitable for less aligned data such as CIFAR-10. The accuracy of the input-dependent LRLC layer substantially exceeds the accuracy of the fixed LRLC layer on CIFAR-10. However, for more spatially aligned datasets such as MNIST and CelebA, input-dependent LRLC yields modest or no improvement.

Although locally connected layers have an inference-time FLOP count similar to standard convolution layers, the relaxation of spatial invariance comes at the cost of an increased number of trainable parameters. In particular, the number of trainable parameters in the LRLC layer grows linearly with the spatial rank (ignoring the combining weights and spatial biases, as they are relatively small). This increase in model parameters does not explain the superiority of the LRLC layer. Locally connected layers have more trainable parameters than LRLC layers, yet perform worse (Figure 4 and Table 2). Moreover, even after widening convolutional layers to match the trainable parameter count of the LRLC layer, networks with only convolutional layers still do not match the accuracy of networks with low-rank locally connected layers (Figure 4, Figure Supp.3, and Table 2). Thus, in our experiments, LRLC layers appear to provide a better inductive bias independent of parameter count.

4.2 Input-dependent low-rank local connectivity is a better inductive bias for datasets with less alignment

In the previous section, our results show that the optimal spatial rank is dataset-dependent. The spatial rank with highest accuracy (the optimal rank) differed across datasets and was generally far from full rank (i.e., the spatial size of the input), which offers an intuition for why convolution layers work well on images: convolution is closer to the optimal rank than vanilla locally connected layers are. The optimal rank seems to depend on alignment in the dataset. For example, the optimal rank was highest for the CelebA dataset, which comprises approximately aligned face images. By contrast, on CIFAR-10, the optimal rank was low, which may reflect the absence of alignment in the dataset beyond a weak bias toward objects appearing in the center of the images.

These findings raise the question of whether one could achieve further gains if the allocation of local filters across space were not fixed across the whole dataset, but rather conditioned on the input. To answer this question, we modified the LRLC layer to assign local filters based on the input (see Section 3.2.2). This approach has some resemblance to previous work on input-dependent filters (Yang et al., 2019; Jia et al., 2016). We tested whether this input-dependent way of selecting local filters yields further gains on the less aligned CIFAR-10 dataset. Our results show that the input-dependent LRLC network indeed achieves higher accuracy on CIFAR-10 than the fixed LRLC layer, and its optimal spatial rank is higher (Figure 5 and Table 3). We also evaluated the input-dependent LRLC on MNIST and CelebA. We found that it helped only slightly on MNIST and hurt accuracy on CelebA compared to the LRLC with fixed weights (Figure 5 and Table 3). These findings indicate that fixed low-rank local connectivity is a better inductive bias for highly aligned data, while input-dependent low-rank local connectivity is better suited to less aligned datasets (Figure 5).

5 Conclusion

In this work, we tested whether spatial invariance, a fundamental property of convolutional layers, is an overly restrictive inductive bias. To address this question, we designed a new locally connected layer (LRLC) where the degree of spatial invariance can be controlled by modifying a spatial rank parameter. This parameter determines the size of the basis set of local filter banks that the layer can use to form local filters at different locations of the input.

Our results show that relaxing spatial invariance using our LRLC layer enhances the accuracy of models over standard convolutional networks, indicating that spatial invariance may be overly restrictive. However, we also found that our proposed LRLC layer achieves higher accuracy than a vanilla locally connected layer, indicating that there are benefits to partial spatial invariance. We show that relaxing spatial invariance in later layers is better than relaxing spatial invariance in early layers. Although the LRLC layer provides benefits across the three datasets we studied, our input-dependent LRLC layer, which adapts local filters to each input, appears to perform even better when data are not well-aligned.

Locally connected layers have largely been ignored by the research community due to the perception that they perform poorly. However, our findings suggest that this pessimism should be reexamined, as locally connected layers with our low-rank parameterization achieve promising performance. Moreover, this new formulation makes local connectivity more useful in practice, as the number of trainable parameters scales with the rank of the LRLC layer instead of the input spatial size, as in vanilla locally connected layers. Further work is necessary to capture the advantages of relaxing spatial invariance in practical applications with complex deep models trained on large-scale datasets. One interesting direction toward this goal could be to build on our LRLC formulation and explore basis sets with mixed filter sizes and dilation rates to construct a variety of layers suited to datasets from different applications.

6 Acknowledgements

We are grateful to Jiquan Ngiam, Pieter-Jan Kindermans, Jascha Sohl-Dickstein, Jaehoon Lee, Daniel Park, Sobhan Naderi, Max Vladymyrov, Hieu Pham, Michael Simbirsky, Roman Novak, Hanie Sedghi, Karthik Murthy, Michael Mozer, and Yani Ioannou for useful discussions and helpful feedback on the manuscript.

References

  • M. J. Arcaro, S. A. McMains, B. D. Singer, and S. Kastner (2009) Retinotopic organization of human ventral visual cortex. Journal of neuroscience 29 (34), pp. 10638–10652. Cited by: §1.
  • D. Bahdanau, K. Cho, and Y. Bengio (2015) Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations, Cited by: §2.
  • S. Bartunov, A. Santoro, B. Richards, L. Marris, G. E. Hinton, and T. Lillicrap (2018) Assessing the scalability of biologically-motivated deep learning algorithms and architectures. In Advances in Neural Information Processing Systems, pp. 9368–9378. Cited by: §1, §1, §2, §3.1.
  • I. Bello, B. Zoph, A. Vaswani, J. Shlens, and Q. V. Le (2019) Attention augmented convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3286–3295. Cited by: §2.
  • L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille (2017a) Deeplab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE transactions on pattern analysis and machine intelligence 40 (4), pp. 834–848. Cited by: Appendix B.
  • L. Chen, G. Papandreou, F. Schroff, and H. Adam (2017b) Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587. Cited by: Appendix B.
  • Z. Chen, Y. Li, S. Bengio, and S. Si (2019) You look twice: gaternet for dynamic filter selection in cnns. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9172–9180. Cited by: §2.
  • A. Coates, B. Huval, T. Wang, D. Wu, B. Catanzaro, and N. Andrew (2013) Deep learning with cots hpc systems. In International conference on machine learning, pp. 1337–1345. Cited by: §2.
  • S. d’Ascoli, L. Sagun, G. Biroli, and J. Bruna (2019) Finding the needle in the haystack with convolutions: on the benefits of architectural bias. In Advances in Neural Information Processing Systems, pp. 9330–9340. Cited by: §2.
  • J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, M. Ranzato, A. Senior, P. Tucker, K. Yang, et al. (2012) Large scale distributed deep networks. In Advances in neural information processing systems, pp. 1223–1231. Cited by: §2.
  • C. Fernando, D. Banarse, C. Blundell, Y. Zwols, D. Ha, A. A. Rusu, A. Pritzel, and D. Wierstra (2017) PathNet: evolution channels gradient descent in super neural networks. arXiv preprint arXiv:1701.08734. Cited by: §2.
  • H. Fukui, T. Hirakawa, T. Yamashita, and H. Fujiyoshi (2019) Attention branch network: learning of attention mechanism for visual explanation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §2.
  • K. Fukushima (1975) Cognitron: a self-organizing multilayered neural network. Biological cybernetics 20 (3-4), pp. 121–136. Cited by: §2.
  • K. Fukushima (1980) Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological cybernetics 36 (4), pp. 193–202. Cited by: §2.
  • I. J. Goodfellow, Y. Bulatov, J. Ibarz, S. Arnoud, and V. Shet (2013) Multi-digit number recognition from street view imagery using deep convolutional neural networks. arXiv preprint arXiv:1312.6082. Cited by: §2.
  • K. Gregor, I. Danihelka, A. Mnih, C. Blundell, and D. Wierstra (2014) Deep autoregressive networks. In International Conference on Machine Learning, Cited by: §2.
  • K. Gregor and Y. LeCun (2010) Emergence of complex-like cells in a temporal product network with local receptive fields. arXiv preprint arXiv:1006.0448. Cited by: §2.
  • S. Gross, M. Ranzato, and A. Szlam (2017) Hard mixtures of experts for large scale weakly supervised vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6865–6873. Cited by: §2.
  • U. Hasson, I. Levy, M. Behrmann, T. Hendler, and R. Malach (2002) Eccentricity bias as an organizing principle for human high-order object areas. Neuron 34 (3), pp. 479–490. Cited by: §1.
  • G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov (2012) Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580. Cited by: §2.
  • J. Hu, L. Shen, and G. Sun (2018) Squeeze-and-excitation networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: Appendix B.
  • D. H. Hubel and T. Wiesel (1963) Shape and arrangement of columns in cat’s striate cortex. The Journal of physiology 165 (3), pp. 559–568. Cited by: §2.
  • D. H. Hubel and T. N. Wiesel (1968) Receptive fields and functional architecture of monkey striate cortex. The Journal of physiology 195 (1), pp. 215–243. Cited by: §2.
  • S. Jetley, N. A. Lord, N. Lee, and P. Torr (2018) Learn to pay attention. In International Conference on Learning Representations, External Links: Link Cited by: §2.
  • X. Jia, B. De Brabandere, T. Tuytelaars, and L. V. Gool (2016) Dynamic filter networks. In Advances in Neural Information Processing Systems, pp. 667–675. Cited by: §2, §4.2.
  • A. Krizhevsky (2014) One weird trick for parallelizing convolutional neural networks. arXiv preprint arXiv:1404.5997. Cited by: §2.
  • R. Lafer-Sousa and B. R. Conway (2013) Parallel, multi-stage processing of colors, faces and shapes in macaque inferior temporal cortex. Nature neuroscience 16 (12), pp. 1870. Cited by: §1.
  • Q. V. Le, J. Ngiam, Z. Chen, D. Chia, P. W. Koh, and A. Y. Ng (2010) Tiled convolutional neural networks. In Advances in neural information processing systems, pp. 1279–1287. Cited by: §2.
  • Q. V. Le, M. Ranzato, R. Monga, M. Devin, K. Chen, G. S. Corrado, J. Dean, and A. Y. Ng (2012) Building high-level features using large scale unsupervised learning. In Proceedings of the 29th International Conference on International Conference on Machine Learning, pp. 507–514. Cited by: §2.
  • Y. LeCun, Y. Bengio, and G. Hinton (2015) Deep learning. nature 521 (7553), pp. 436–444. Cited by: §1.
  • Y. LeCun (1989) Generalization and network design strategies. In Connectionism in Perspective, R. Pfeifer, Z. Schreter, F. Fogelman, and L. Steels (Eds.), Zurich, Switzerland. Note: an extended version was published as a technical report of the University of Toronto Cited by: §1, §1, §2, §3.1.
  • D. Linsley, D. Shiebler, S. Eberhardt, and T. Serre (2019) Learning what and where to attend with humans in the loop. In International Conference on Learning Representations, External Links: Link Cited by: §2.
  • R. Liu, J. Lehman, P. Molino, F. P. Such, E. Frank, A. Sergeev, and J. Yosinski (2018) An intriguing failing of convolutional neural networks and the CoordConv solution. In Advances in Neural Information Processing Systems, pp. 9605–9616. Cited by: §2, Figure 4, §4.1.
  • W. Liu, Y. Wen, Z. Yu, M. Li, B. Raj, and L. Song (2017) SphereFace: deep hypersphere embedding for face recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 212–220. Cited by: §2.
  • M. S. Livingstone, J. L. Vincent, M. J. Arcaro, K. Srihasam, P. F. Schade, and T. Savage (2017) Development of the macaque face-patch system. Nature communications 8 (1), pp. 1–12. Cited by: §1.
  • M. McGill and P. Perona (2017) Deciding how to decide: dynamic routing in artificial neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 2363–2372. Cited by: §2.
  • R. Novak, L. Xiao, Y. Bahri, J. Lee, G. Yang, J. Hron, D. A. Abolafia, J. Pennington, and J. Sohl-Dickstein (2018) Bayesian deep convolutional networks with many channels are gaussian processes. Cited by: §1, §1, §2, §3.1.
  • S. J. Nowlan and G. E. Hinton (1992) Simplifying neural networks by soft weight-sharing. Neural computation 4 (4), pp. 473–493. Cited by: §2.
  • B. A. Olshausen and D. J. Field (1996) Natural image statistics and efficient coding. Network: computation in neural systems 7 (2), pp. 333–339. Cited by: Figure 1.
  • R. Raina, A. Madhavan, and A. Y. Ng (2009) Large-scale deep unsupervised learning using graphics processors. In Proceedings of the 26th annual international conference on machine learning, pp. 873–880. Cited by: §2.
  • R. Rajimehr, N. Y. Bilenko, W. Vanduffel, and R. B. Tootell (2014) Retinotopy versus face selectivity in macaque visual cortex. Journal of cognitive neuroscience 26 (12), pp. 2691–2700. Cited by: §1.
  • P. Ramachandran, N. Parmar, A. Vaswani, I. Bello, A. Levskaya, and J. Shlens (2019) Stand-alone self-attention in vision models. In Advances in Neural Information Processing Systems, Cited by: §2.
  • D. L. Ruderman and W. Bialek (1994) Statistics of natural images: scaling in the woods. In Advances in neural information processing systems, pp. 551–558. Cited by: Figure 1.
  • Z. M. Saygin, D. E. Osher, E. S. Norton, D. A. Youssoufian, S. D. Beach, J. Feather, N. Gaab, J. D. Gabrieli, and N. Kanwisher (2016) Connectivity precedes function in the development of the visual word form area. Nature neuroscience 19 (9), pp. 1250–1255. Cited by: §1.
  • F. Schroff, D. Kalenichenko, and J. Philbin (2015) FaceNet: a unified embedding for face recognition and clustering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 815–823. Cited by: §2.
  • E. P. Simoncelli and B. A. Olshausen (2001) Natural image statistics and neural representation. Annual review of neuroscience 24 (1), pp. 1193–1216. Cited by: Figure 1.
  • K. Srihasam, J. L. Vincent, and M. S. Livingstone (2014) Novel domain formation reveals proto-architecture in inferotemporal cortex. Nature neuroscience 17 (12), pp. 1776. Cited by: §1.
  • Y. Sun, D. Liang, X. Wang, and X. Tang (2015) Deepid3: face recognition with very deep neural networks. arXiv preprint arXiv:1502.00873. Cited by: §2.
  • Y. Sun, X. Wang, and X. Tang (2014) Deep learning face representation from predicting 10,000 classes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1891–1898. Cited by: §2.
  • Y. Taigman, M. Yang, M. Ranzato, and L. Wolf (2014) DeepFace: closing the gap to human-level performance in face verification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1701–1708. Cited by: §2.
  • R. Uetz and S. Behnke (2009) Large-scale object recognition with cuda-accelerated hierarchical neural networks. In 2009 IEEE international conference on intelligent computing and intelligent systems, Vol. 1, pp. 536–541. Cited by: §2.
  • A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in neural information processing systems, pp. 5998–6008. Cited by: §2.
  • C. Von der Malsburg (1973) Self-organization of orientation sensitive cells in the striate cortex. Kybernetik 14 (2), pp. 85–100. Cited by: §2.
  • F. Wang, M. Jiang, C. Qian, S. Yang, C. Li, H. Zhang, X. Wang, and X. Tang (2017) Residual attention network for image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3156–3164. Cited by: §2.
  • S. Woo, J. Park, J. Lee, and I. So Kweon (2018) CBAM: convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 3–19. Cited by: §2.
  • B. Yang, G. Bender, Q. V. Le, and J. Ngiam (2019) CondConv: conditionally parameterized convolutions for efficient inference. In Advances in Neural Information Processing Systems, pp. 1305–1316. Cited by: §2, §4.2.
  • J. Yim, H. Jung, B. Yoo, C. Choi, D. Park, and J. Kim (2015) Rotating your face using multi-task deep neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 676–684. Cited by: §2.
  • F. Yu and V. Koltun (2015) Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122. Cited by: Appendix B.
  • K. Zhao, W. Chu, and H. Zhang (2016) Deep region and multi-label learning for facial action unit detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3391–3399. Cited by: §2.

Appendix A Supplementary figures

Figure Supp.1: Structured vs. unstructured initialization. Top-1 accuracy, similar to Figure 3. We study the effect of the structured initialization we used in our experiments for the LRLC layers (i.e., initialization equivalent to a convolution layer with a random kernel). In the structured initialization, we initialized the layer combining weights to a constant. We compared this initialization to a random initialization of the combining weights. Our results show that the structured initialization performs quite similarly to the unstructured initialization. Error bars indicate standard errors computed from training models from 10 different random initializations.

Figure Supp.2: Factorized vs. full combining weights and biases. Top-1 accuracy, similar to Figure 3. We study the effect of factorizing the combining weights and biases in Equations 3 and 5. We compare the performance of an LRLC layer with factorized weights and biases to an LRLC layer without this factorization. The layer with the factorization performs somewhat better.

Figure Supp.3: Accuracy as a function of model parameters. Classification accuracy as a function of the number of network parameters. Error bars indicate standard errors computed from training models from 10 different random initializations.

Figure Supp.4: Computational cost as a function of the spatial rank of the locally connected kernel. As the spatial rank of the locally connected kernel increases, the computational cost, measured in floating point operations (FLOPs), of the input-dependent LRLC layer and of the convolution layer with a similar number of trainable parameters (wide convolution) grows at a similar rate, while the computational cost of the LRLC layer stays constant because it can be converted into a locally connected layer at inference time.

Figure Supp.5: Input-dependent combining weights network architecture.

Appendix B Input-dependent combining weights network

The architecture of the input-dependent combining weights network ($g$) is illustrated in Figure Supp.5. The initial operation of $g$ projects the input channels to a low-dimensional space using a convolution. This projection is used to keep the number of parameters in $g$ small, and also because selecting filter banks from the basis set is potentially a simpler task than the classification task the network is performing. Motivated by work on segmentation (Chen et al., 2017a; Yu and Koltun, 2015; Chen et al., 2017b), the second operation collects statistics across different scales of the input using parallel pooling and dilated depth-wise convolution layers followed by bilinear resizing. Note that the increase in parameters here is small due to the initial projection step and the use of depth-wise convolution. The next stage is a nonlinear low-dimensional bottleneck followed by a nonlinear dimensionality expansion with convolution. This operation is similar in flavor to the Squeeze-and-Excitation operation (Hu et al., 2018), and is included to give $g$ the power to learn a useful embedding of the input. The last layer is a linear convolution that reduces the number of channels to the spatial rank $K$.

Appendix C Supplementary tables

Subset       MNIST   CIFAR-10   CelebA
Train        55000   45000      162770
Validation   5000    5000       19867
Test         10000   10000      19962
Table Supp.1: Number of examples in each dataset.

CONVOLUTION (FC) is a convolution network with a fully connected last layer and without global average pooling.

Table Supp.2: Summary of results (train subset). Top-1 train accuracy of different models (mean ± SE).

CONVOLUTION (FC) is a convolution network with a fully connected last layer and without global average pooling.

Table Supp.3: Summary of results (validation subset). Top-1 validation accuracy of different models (mean ± SE).

CONVOLUTION (FC) is a convolution network with a fully connected last layer and without global average pooling.

Table Supp.4: Summary of results (test subset). Top-1 test accuracy of different models (mean ± SE). The optimal rank in LRLC and the optimal width in wide convolution models are obtained by evaluating models on a separate validation subset.