Convolutional neural networks (CNNs) are now the dominant approach across many computer vision tasks. Convolution layers possess two main properties that are believed to be key to their success: local receptive fields and spatially invariant filters. In this work, we seek to revisit the latter. Previous work comparing convolutional layers, which share filters across all spatial locations, with locally connected layers, which have no weight sharing, has found that convolution is advantageous on common datasets (LeCun, 1989; Bartunov et al., 2018; Novak et al., 2018). However, this observation leaves open the possibility that some departure from spatial invariance could outperform both convolution and local connectivity (Figure 1).
The structure of CNNs is often likened to the primate visual system (LeCun et al., 2015). However, the brain has no direct mechanism to share weights across space. Like feature maps in CNNs, visual areas are broadly organized according to neurons' selectivity to retinal position, but in higher-level visual areas, position selectivity is weaker. Instead, high-level visual areas subdivide into stimulus-selective subregions (Hasson et al., 2002; Arcaro et al., 2009; Lafer-Sousa and Conway, 2013; Rajimehr et al., 2014; Srihasam et al., 2014; Saygin et al., 2016; Livingstone et al., 2017). This gradual progression from organization by position selectivity to organization by feature selectivity is more consistent with local connectivity than with convolution.
Motivated by the lack of synaptic weight sharing in the brain, we hypothesized that neural networks could achieve greater performance by relaxing spatial invariance (Figure 1). Particularly at higher layers of the neural network, where receptive fields cover most or all of the image, applying the same weights at all locations may be a less efficient use of computation than applying different weights at different locations. However, evidence suggests that typical datasets are too small to constrain the parameters of a locally connected layer; functions expressible by convolutional layers are a subset of those expressible by locally connected layers, yet convolution typically achieves higher performance (LeCun, 1989; Bartunov et al., 2018; Novak et al., 2018).
To get intuition for why some relaxation of spatial invariance could be useful, consider images of natural scenes with ground and sky regions. It may be a bad idea to apply different local filters to different parts of the sky with similar appearance. However, it may also be overly limiting to apply the same filter bank to both the sky and the ground regions. Some relaxation of spatial invariance, such as separate sky and ground filters, may better suit this hypothetical data.
To test the hypothesis that spatial invariance is an overly restrictive inductive bias, we create a new tool that allows us to relax spatial invariance. We develop a low-rank locally connected (LRLC) layer that can parametrically adjust the degree of spatial invariance. This layer is one particular method to relax spatial invariance by reducing weight sharing. Rather than learning a single filter bank to apply at all positions, as in a convolutional layer, or different filter banks, as in a locally connected layer, the LRLC layer learns a set of filter banks, which are linearly combined using combining weights per spatial position (Figure 2).
In our experiments, we find that relaxing spatial invariance with the LRLC layer leads to better performance compared to both convolutional and locally connected layers across three datasets (MNIST, CIFAR-10, and CelebA). These results suggest that some level of relaxation of spatial invariance is a better inductive bias for image datasets compared to the spatial invariance enforced by convolution layers or lack of spatial invariance in locally connected layers.
2 Related Work
The idea of local connectivity in connectionist models predates the popularity of backpropagation and convolution. Inspired by the organization of visual cortex (Hubel and Wiesel, 1963, 1968), several early neural network models consisted of one or more two-dimensional feature maps where neurons preferentially receive input from other neurons at nearby locations (Von der Malsburg, 1973; Fukushima, 1975). Breaking with biology, the Neocognitron (Fukushima, 1980) shared weights across spatial locations, resulting in spatial invariance. However, the Neocognitron was trained using a competitive learning algorithm rather than gradient descent. LeCun (1989) combined weight sharing with backpropagation, demonstrating considerable gains over locally connected networks (LCNs) on a digit recognition task.
Although the last decade has seen revitalized interest in CNNs for computer vision, local connectivity has fallen out of favor. When layer computation is distributed across multiple nodes, weight sharing introduces additional synchronization costs (Krizhevsky, 2014); thus, the first massively parallel deep neural networks employed exclusively locally connected layers (Raina et al., 2009; Uetz and Behnke, 2009; Dean et al., 2012; Le et al., 2012; Coates et al., 2013). Some of the first successful neural networks for computer vision tasks combined convolutional and locally connected layers (Hinton et al., 2012; Goodfellow et al., 2013; Gregor et al., 2014), as have networks for face recognition (Taigman et al., 2014; Sun et al., 2014, 2015; Yim et al., 2015). However, newer architectures, even those designed for face recognition (Schroff et al., 2015; Liu et al., 2017), generally use convolution exclusively.
Work comparing convolutional and locally connected networks for computer vision tasks has invariably found that CNNs yield better performance. Bartunov et al. (2018) compared the classification performance on multiple image datasets as part of a study on biologically plausible learning algorithms; convolution achieved higher accuracy across datasets. Novak et al. (2018) derived a kernel equivalent to an infinitely wide CNN at initialization and showed that, in this infinite-width limit, CNNs and LCNs are equivalent. They found that SGD-trained CNNs substantially outperform both SGD-trained LCNs and this kernel. However, d’Ascoli et al. (2019) found that initially training a convolution layer and then converting the convolutional layers to equivalent fully connected layers near the end of training led to a slight increase in performance.
Other work has attempted to combine the efficiency of convolution with some of the advantages of local connectivity. Nowlan and Hinton (1992) suggested a "soft weight-sharing" approach that penalizes the difference between the distribution of weights and a mixture of Gaussians. Other work has used periodic weight sharing, also known as tiling, where filters a fixed number of pixels apart share weights (Le et al., 2010; Gregor and LeCun, 2010), or has subdivided feature maps into patches where weights are shared only within each patch (Zhao et al., 2016). CoordConv (Liu et al., 2018) concatenates feature maps containing the x and y coordinates of the pixels to the input of a CNN, permitting direct use of position information throughout the network.
Input-dependent low rank local connectivity, which we explore in Section 4.2, is further related to previous work that applies input-dependent convolutional filters. Spatial soft attention mechanisms (Wang et al., 2017; Jetley et al., 2018; Woo et al., 2018; Linsley et al., 2019; Fukui et al., 2019) can be interpreted as a mechanism for applying different weights at different positions via per-position scaling of entire filters. Self-attention (Bahdanau et al., 2015; Vaswani et al., 2017), which has recently been applied to image models (Bello et al., 2019; Ramachandran et al., 2019), provides an alternative mechanism to integrate information over space with content-dependent mixing weights. Other approaches apply the same convolutional filters across space, but select filters or branches separately for each example (McGill and Perona, 2017; Fernando et al., 2017; Gross et al., 2017; Chen et al., 2019; Yang et al., 2019). The dynamic local filtering layer of Jia et al. (2016) uses a neural network to predict a separate set of filters for each position. Our approach predicts only the combining weights for a fixed set of bases, which provides control over the degree of spatial invariance through the size of the layer kernel basis set.
3.1 Convolutional and locally connected layers

Let $x \in \mathbb{R}^{H \times W \times C}$ be an input with $C$ channels ($H$: input height, $W$: input width, $C$: input channels). In convolution layers, the input is convolved with a filter bank $w \in \mathbb{R}^{k_h \times k_w \times C \times C'}$ to compute $y \in \mathbb{R}^{H \times W \times C'}$ ($k_h$: filter height, $k_w$: filter width, $C'$: filter output channels). For clarity of presentation, we fix the layer output and input to have the same spatial size, and the stride to be 1, though we relax these constraints in the experiments. More formally, the operation of $w$ on the local input patch of size $k_h \times k_w \times C$ centered at location $(i, j)$, denoted $x^{(i,j)}$, is:

$$y_{ij} = \langle x^{(i,j)}, w \rangle, \qquad (1)$$

where $y_{ij} \in \mathbb{R}^{C'}$ is the output at location $(i, j)$ and $\langle \cdot, \cdot \rangle$ denotes the sum over the element-wise multiplication of the input patch and the filter bank along the first 3 axes. The spatial invariance of convolution refers to applying the same filter bank to input patches at all locations (Figure 2 left).
Locally connected layers, on the other hand, do not share weights. Similar to convolution, they apply filters with local receptive fields; however, the filters are not shared across space (Figure 2 right). Formally, each output $y_{ij}$ is computed by applying a different filter bank $w^{(i,j)}$ to the corresponding input patch (i.e., $y_{ij} = \langle x^{(i,j)}, w^{(i,j)} \rangle$).
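As a concrete illustration, the two layer types can be sketched patch-by-patch in numpy. This is a minimal sketch assuming stride 1, zero padding, and odd filter sizes; the function and variable names are illustrative, not from the paper:

```python
import numpy as np

def conv_layer(x, w):
    """Convolution: the SAME filter bank `w` is applied to the local
    patch centered at every position (i, j).
    x: (H, W, C); w: (kh, kw, C, Cout) with odd kh, kw. Returns (H, W, Cout)."""
    H, W, C = x.shape
    kh, kw, _, Cout = w.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2), (0, 0)))
    y = np.empty((H, W, Cout))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + kh, j:j + kw]  # local kh x kw x C patch at (i, j)
            # inner product over the first 3 axes, one value per output channel
            y[i, j] = np.tensordot(patch, w, axes=3)
    return y

def locally_connected_layer(x, w_loc):
    """Locally connected layer: a DIFFERENT filter bank per position.
    w_loc: (H, W, kh, kw, C, Cout)."""
    H, W, C = x.shape
    _, _, kh, kw, _, Cout = w_loc.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2), (0, 0)))
    y = np.empty((H, W, Cout))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + kh, j:j + kw]
            y[i, j] = np.tensordot(patch, w_loc[i, j], axes=3)
    return y
```

When every position of `w_loc` holds the same filter bank, the locally connected layer reduces exactly to convolution, reflecting that convolution's function class is a subset of local connectivity's.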
Empirically, locally connected layers perform poorly compared to convolutional layers (Novak et al., 2018). Intuitively, local regions in images are not completely independent, and we expect filters learned over one local region to be useful when applied to a nearby region. While locally connected layers are strictly more powerful than convolutional layers and could in theory converge to the convolution solution, in practice they do not, and instead overfit the training data. However, the superior performance of convolution layers over locally connected layers (LeCun, 1989; Bartunov et al., 2018; Novak et al., 2018) does not imply that spatial invariance is strictly required.
Below, we develop methods that control the degree of spatial invariance a layer can have, which allows us to test the hypothesis that spatial invariance may be overly restrictive.
3.2 Low-rank locally connected layer
Here, we design a locally connected layer with a spatial rank parameter $K$ that controls the degree of spatial invariance. We adjust the degree of spatial invariance by using a set of $K$ local filter banks (a basis set) instead of the single filter bank of a convolution layer or the $H \times W$ filter banks of a classic locally connected layer ($K$ is a hyperparameter that may be adjusted based on a validation subset). For each input patch $x^{(i,j)}$, we construct a filter bank $w^{(i,j)}$ to operate on that patch as a linear combination of the members of the basis set. That is,

$$w^{(i,j)} = \sum_{k=1}^{K} \alpha_k^{(i,j)} w_k, \qquad (2)$$

where $\alpha^{(i,j)} \in \mathbb{R}^{K}$ are the weights that combine the filter banks $w_1, \dots, w_K$ in the basis set, and $y_{ij} = \langle x^{(i,j)}, w^{(i,j)} \rangle$. This formulation is equivalent to a low-rank factorization, with rank $K$, of the layer's locally connected kernel. Thus, we term this layer the "low-rank locally connected" (LRLC) layer (Figure 2 middle).
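The per-position linear combination can be written as a single contraction: the equivalent locally connected kernel is a rank-limited combination of the basis filter banks. A minimal numpy sketch (names are illustrative):

```python
import numpy as np

def lrlc_filters(basis, alpha):
    """Per-position filter banks from a rank-K basis.
    basis: (K, kh, kw, C, Cout) -- K filter banks shared by all positions.
    alpha: (H, W, K)            -- combining weights per spatial position.
    Returns w_loc: (H, W, kh, kw, C, Cout), the equivalent locally
    connected kernel: w_loc[i, j] = sum_k alpha[i, j, k] * basis[k]."""
    return np.einsum('ijk,kabcd->ijabcd', alpha, basis)
```

With $K = 1$ every position receives (a scaled copy of) the same filter bank, recovering convolution; as $K$ grows toward $H \times W$, the layer can approach an unconstrained locally connected kernel.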
Note that, in this paper, we use a basis set with filters of similar structure. However, this layer could also be used with a basis set containing filters of different structure (e.g., different filter sizes and/or dilation rates).
The filters in the basis set are linearly combined using weights that are specific to each spatial location. In particular, with input spatial size $H \times W$ and $K$ filter banks in the basis set, we need $H \times W \times K$ weights to combine these filter banks and form the filter bank at each spatial location. We propose two ways to learn these combining weights. One method learns weights that are shared across all examples, while the second predicts the weights per example as a function of the input.
3.2.1 Fixed combining weights
The simplest method of learning combining weights is to learn $K$ scalars per spatial position. This approach is well-suited to datasets with spatially inhomogeneous features, e.g., datasets of aligned faces. However, the number of combining weights then scales linearly with the number of pixels in the image, which may be large. Thus, to reduce parameters, we learn combining weights per row and per column of location $(i, j)$ as follows:

$$\alpha_k^{(i,j)} = \alpha_k^{\mathrm{rows},\,i} + \alpha_k^{\mathrm{cols},\,j}, \qquad (3)$$

where $\alpha^{\mathrm{rows}} \in \mathbb{R}^{H \times K}$ and $\alpha^{\mathrm{cols}} \in \mathbb{R}^{W \times K}$.
This formulation reduces the number of combining-weight parameters to $(H + W) \times K$, which limits the expressivity of the layer (i.e., constrains the maximum degree of relaxation of spatial invariance). This formulation also performs better in practice (Figure Supp.2).
We further normalize the weights to limit the scale of the combined filters. Common choices for normalization are dividing by the weights' norm or using the softmax function. In our early experimentation, we found that softmax normalization performs slightly better. Thus, the combining weights are computed as follows:

$$\hat{\alpha}_k^{(i,j)} = \frac{\exp\left(\alpha_k^{(i,j)}\right)}{\sum_{k'=1}^{K} \exp\left(\alpha_{k'}^{(i,j)}\right)}. \qquad (4)$$
The filter banks in the basis set and the combining weights can all be learned end-to-end. In practice, we implement this layer with convolution and point-wise multiplication operations, as in Algorithm 1, rather than forming the equivalent locally connected layer. This implementation choice is due to locally connected layers being slower in practice because current hardware is memory-bandwidth-limited, while convolution is highly optimized and fast. We initialize the combining weights to a constant, which is equivalent to a convolution layer with a random kernel, though our main findings remained the same with or without this initialization (Figure Supp.1).
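A sketch in the spirit of this implementation strategy: run one convolution per basis filter bank, then combine the $K$ output maps with per-position weights, instead of materializing a per-position kernel. Plain loops stand in for an optimized convolution kernel, and this is our reconstruction of the stated strategy, not the paper's Algorithm 1 verbatim:

```python
import numpy as np

def lrlc_layer(x, basis, alpha):
    """Efficient LRLC forward pass via K convolutions + pointwise combination.
    x: (H, W, C); basis: (K, kh, kw, C, Cout); alpha: (H, W, K),
    e.g. the softmax-normalized combining weights."""
    H, W, C = x.shape
    K, kh, kw, _, Cout = basis.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2), (0, 0)))
    # K standard convolutions (the loops stand in for an optimized conv)
    ys = np.empty((K, H, W, Cout))
    for k in range(K):
        for i in range(H):
            for j in range(W):
                ys[k, i, j] = np.tensordot(xp[i:i + kh, j:j + kw],
                                           basis[k], axes=3)
    # by linearity, combining the K outputs per position equals applying
    # the combined filter bank w^(i,j) = sum_k alpha[i,j,k] * basis[k]
    return np.einsum('ijk,kijc->ijc', alpha, ys)
```

Because convolution is linear in the filter, combining the $K$ convolution outputs with $\alpha$ is mathematically identical to applying the combined filter bank at each position, which is why the layer can later be converted to an equivalent locally connected layer.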
At training time, the parameter count of the LRLC layer is approximately $K$ times that of a corresponding convolutional layer, as is the computational cost of Algorithm 1. However, after the network is trained, the LRLC layer can be converted to a locally connected layer. When convolution is implemented as matrix multiplication, locally connected layers have the same FLOP count as convolution (Figure Supp.4), although the amount of memory needed to store the weights scales with the spatial size of the feature map.
Spatially varying bias
Typically, a learned bias per channel is added to the output of a convolution. Here, we allow the bias added to the LRLC output to also vary spatially. Similar to the combining weights, per-row and per-column biases are learned and added to the standard channel bias. Formally, we define the layer biases $b \in \mathbb{R}^{H \times W \times C'}$ as:

$$b_{ijc} = \beta_c + \beta_i^{\mathrm{rows}} + \beta_j^{\mathrm{cols}},$$

where $\beta \in \mathbb{R}^{C'}$, $\beta^{\mathrm{rows}} \in \mathbb{R}^{H}$, and $\beta^{\mathrm{cols}} \in \mathbb{R}^{W}$. The special case of the LRLC layer with $K = 1$ is equivalent to a convolution operation followed by adding the spatially varying bias. We use this case in our experiments as a simple baseline to test whether relaxing spatial invariance in just the biases is enough to see improvements.
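A minimal sketch of the spatially varying bias (treating the per-row and per-column terms as scalars per row/column is an assumption consistent with the text; names are illustrative):

```python
import numpy as np

def spatial_bias(b_chan, b_row, b_col):
    """Spatially varying bias: shared per-channel bias plus per-row and
    per-column terms, broadcast to a full (H, W, Cout) bias tensor.
    b_chan: (Cout,); b_row: (H,); b_col: (W,)."""
    return (b_chan[None, None, :]
            + b_row[:, None, None]
            + b_col[None, :, None])
```

This costs only $H + W + C'$ parameters, so even the rank-1 "convolution plus spatial bias" baseline adds position information at negligible parameter cost.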
3.2.2 Input-dependent combining weights
The fixed combining weights formulation intuitively will work best when all images are aligned with structure that appears consistently in the same spatial position. Many image datasets have some alignment by construction, and we expect this approach to be particularly successful for such datasets. However, this formulation may not be well-suited to datasets without image alignment. In this section, we describe an extension of the LRLC layer that conditions the combining weights on the input.
Formally, we modify the combining weights in equation 3 to make them a function of the input:

$$\alpha^{(i,j)} = g(x)_{ij},$$

where $g$ is a lightweight neural network that predicts the combining weights for each position. More formally, $g$ takes in the input $x$ and outputs weights $g(x) \in \mathbb{R}^{H \times W \times K}$. The predicted weights are then normalized as in equation 4 and used as before to combine the filter banks in the basis set to form local filters for each spatial location. As in Section 3.2.1, a spatially varying bias is also applied to the output of the layer. The architecture used for $g$ has low computational cost, consisting of several dilated separable convolutions applied in parallel, followed by a small series of cheap aggregation layers that output an $H \times W \times K$ tensor. The full architecture of $g$ is detailed in supplementary Section B and shown in Figure Supp.5.
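To illustrate the idea only (not the paper's architecture for $g$), a deliberately simplified stand-in that predicts normalized combining weights from the input using a per-pixel linear map; `w_proj` is a hypothetical parameter:

```python
import numpy as np

def predict_combining_weights(x, w_proj):
    """Simplified stand-in for the lightweight network g: a 1x1
    convolution (per-pixel linear map) from C input channels to K
    combining weights, followed by softmax normalization over K.
    The real g uses dilated separable convolutions; this is only a sketch.
    x: (H, W, C); w_proj: (C, K). Returns (H, W, K) summing to 1 over K."""
    logits = x @ w_proj                                   # (H, W, K)
    logits = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=-1, keepdims=True)
```

Because the weights depend on `x`, each example selects its own mixture of the shared basis filter banks, which is what lets the layer adapt filter allocation to unaligned inputs.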
4 Experiments

We performed classification experiments on the MNIST, CIFAR-10, and CelebA datasets. We trained our models without data augmentation or regularization to focus our investigation on the pure effect of the degree of spatial invariance on generalization. In our experiments, we used the Adam optimizer with a learning-rate schedule consisting of a linear warmup period followed by cosine decay. We used Tensor Processing Unit (TPU) accelerators for all training.
We conducted our study using a network of stacked layers, each with the same number of channels and local filters of a fixed small size. Each layer is followed by batch normalization and a ReLU nonlinearity. The network ends with a global average pooling operation followed by a linear fully connected layer that forms the predictions. Our network had sufficient capacity, and we trained for a sufficiently large number of steps, to achieve high training accuracy (Table Supp.2). For all results, we show the mean accuracy ± standard error based on models trained from 10 different random initializations. Our division into training, validation, and test subsets is shown in Table Supp.1.
4.1 Spatial invariance may be overly restrictive
In this section, we investigate whether relaxing the degree of spatial invariance of a layer provides a better inductive bias for image classification. We replaced convolution layers at different depths of the network (first, second, third, or all layers) with the low-rank locally connected (LRLC) layer. We varied the spatial rank of the LRLC layer, which controls the degree of deviation from spatially invariant convolution layers toward locally connected layers. If the rank is small, the network is constrained to share filters across space; the higher the rank, the less sharing is imposed. We trained our models and quantified the generalization accuracy on test data at these different ranks.
When the rank is 1, the LRLC layer is equivalent to a convolution layer with an additional spatial bias. Adding this spatial bias to the convolution boosted accuracy over normal convolution layers (Table 1). Increasing the spatial rank allows the layer to use different filters at different spatial locations and deviate further from convolution networks. Our results show that doing so further increases accuracy (Figure 3). We find that the accuracy of networks with LRLC layers placed at any depth, or with all layers replaced by LRLC layers, is higher than that of pure convolutional networks (Figure 3 and Table 1). These findings provide evidence for the hypothesis that spatial invariance may be overly restrictive. Our results further show that relaxing spatial invariance late in the network (near the output) is better than early (near the input). Relaxing spatial invariance late in the network was also better than doing so at every layer (Table 1). The optimal spatial rank varied across datasets: it was lowest for CIFAR-10 and highest for CelebA.
The LRLC layer has the ability to encode position, which vanilla convolution layers lack. This additional position encoding may explain the increased accuracy. Previous work has attempted to give this capability to convolution networks by augmenting the input with coordinate channels, an approach known as CoordConv (Liu et al., 2018). To test whether the efficacy of the LRLC layer could be explained solely by its ability to encode position, we compared its performance to that of CoordConv. Our results show that CoordConv outperforms vanilla convolution but still lags behind the LRLC network (Table 2 and Figure 4), suggesting that the inductive bias of the LRLC layer is better suited to the data. Unlike CoordConv, the LRLC layer allows the degree of spatial invariance to be controlled and adapted to each dataset by adjusting the spatial rank, which offers an intuition for why it suits the data better.
Although locally connected layers have an inference-time FLOP count similar to standard convolution layers, the relaxation of spatial invariance comes at the cost of an increased number of trainable parameters. In particular, the number of trainable parameters in the LRLC layer grows linearly with the spatial rank (ignoring the combining weights and spatial biases, as they are relatively small). This increase in model parameters does not explain the superiority of the LRLC layer: locally connected layers have more trainable parameters than LRLC layers, yet perform worse (Figure 4 and Table 2). Moreover, even after widening convolutional layers to match the trainable parameter count of the LRLC layer, networks with only convolutional layers still do not match the accuracy of networks with low-rank locally connected layers (Figures 4 and Supp.3 and Table 2). Thus, in our experiments, LRLC layers appear to provide a better inductive bias independent of parameter count.
4.2 Input-dependent low-rank local connectivity is a better inductive bias for datasets with less alignment
In the previous section, our results showed that the optimal spatial rank is dataset-dependent. The spatial rank with highest accuracy (the optimal rank) differed across datasets and was generally far from full rank (i.e., the spatial size of the input), which provides intuition for why convolution layers work well on images: convolution is closer to the optimal rank than vanilla locally connected layers. The optimal rank seems to depend on the degree of alignment in the dataset. For example, the optimal rank was highest for the CelebA dataset, which comprises approximately aligned face images. By contrast, on CIFAR-10, the optimal rank was low, which may reflect the absence of alignment in the dataset beyond a weak bias toward objects appearing in the center of the images.
These findings raise the question of whether one could achieve further gains if the allocation of local filters across space were not fixed across the whole dataset but instead conditioned on the input. To answer this question, we modified the LRLC layer to allow it to assign local filters based on the input (see Section 3.2.2). This approach has some resemblance to previous work on input-dependent filters (Yang et al., 2019; Jia et al., 2016). We tested whether this input-dependent way of selecting local filters yields further gains on the less-aligned CIFAR-10 dataset. Our results show that the input-dependent LRLC network indeed achieves higher accuracy on CIFAR-10 than the fixed LRLC layer, and yields a higher optimal spatial rank (Figure 5 and Table 3). We also experimented with the input-dependent LRLC on MNIST and CelebA. We found that it helped only slightly on MNIST and hurt accuracy on CelebA compared to the LRLC with fixed weights (Figure 5 and Table 3). This finding suggests that low-rank local connectivity with fixed weights is a better inductive bias for highly aligned data, while input-dependent low-rank local connectivity is better suited to less-aligned datasets (Figure 5).
5 Discussion

In this work, we tested whether spatial invariance, a fundamental property of convolutional layers, is an overly restrictive inductive bias. To address this question, we designed a new locally connected layer (LRLC) in which the degree of spatial invariance can be controlled by modifying a spatial rank parameter. This parameter determines the size of the basis set of local filter banks that the layer uses to form local filters at different locations of the input.
Our results show that relaxing spatial invariance using our LRLC layer enhances the accuracy of models over standard convolutional networks, indicating that spatial invariance may be overly restrictive. However, we also found that our proposed LRLC layer achieves higher accuracy than a vanilla locally connected layer, indicating that there are benefits to partial spatial invariance. We show that relaxing spatial invariance in later layers is better than relaxing spatial invariance in early layers. Although the LRLC layer provides benefits across the three datasets we studied, our input-dependent LRLC layer, which adapts local filters to each input, appears to perform even better when data are not well-aligned.
Locally connected layers have largely been ignored by the research community due to the perception that they perform poorly. However, our findings suggest that this pessimism should be reexamined, as locally connected layers with our low-rank parameterization achieve promising performance. Moreover, this new formulation makes local connectivity more practical, as the number of trainable parameters scales with the rank of the LRLC layer rather than with the input spatial size, as in vanilla locally connected layers. Further work is necessary to capture the advantages of relaxing spatial invariance in practical applications with complex deep models trained on large-scale datasets. One interesting direction toward this goal could be to use our LRLC formulation with basis sets of mixed filter sizes and dilation rates, to construct a variety of layers suited to datasets from different applications.
Acknowledgements

We are grateful to Jiquan Ngiam, Pieter-Jan Kindermans, Jascha Sohl-Dickstein, Jaehoon Lee, Daniel Park, Sobhan Naderi, Max Vladymyrov, Hieu Pham, Michael Simbirsky, Roman Novak, Hanie Sedghi, Karthik Murthy, Michael Mozer, and Yani Ioannou for useful discussions and helpful feedback on the manuscript.
References

- Retinotopic organization of human ventral visual cortex. Journal of neuroscience 29 (34), pp. 10638–10652. Cited by: §1.
- Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations, Cited by: §2.
- Assessing the scalability of biologically-motivated deep learning algorithms and architectures. In Advances in Neural Information Processing Systems, pp. 9368–9378. Cited by: §1, §1, §2, §3.1.
- Attention augmented convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3286–3295. Cited by: §2.
- Deeplab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE transactions on pattern analysis and machine intelligence 40 (4), pp. 834–848. Cited by: Appendix B.
- Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587. Cited by: Appendix B.
- You look twice: gaternet for dynamic filter selection in cnns. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9172–9180. Cited by: §2.
- Deep learning with cots hpc systems. In International conference on machine learning, pp. 1337–1345. Cited by: §2.
- Finding the needle in the haystack with convolutions: on the benefits of architectural bias. In Advances in Neural Information Processing Systems, pp. 9330–9340. Cited by: §2.
- Large scale distributed deep networks. In Advances in neural information processing systems, pp. 1223–1231. Cited by: §2.
- PathNet: evolution channels gradient descent in super neural networks. arXiv preprint arXiv:1701.08734. Cited by: §2.
- Attention branch network: learning of attention mechanism for visual explanation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §2.
- Cognitron: a self-organizing multilayered neural network. Biological cybernetics 20 (3-4), pp. 121–136. Cited by: §2.
- Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological cybernetics 36 (4), pp. 193–202. Cited by: §2.
- Multi-digit number recognition from street view imagery using deep convolutional neural networks. arXiv preprint arXiv:1312.6082. Cited by: §2.
- Deep autoregressive networks. In International Conference on Machine Learning, Cited by: §2.
- Emergence of complex-like cells in a temporal product network with local receptive fields. arXiv preprint arXiv:1006.0448. Cited by: §2.
- Hard mixtures of experts for large scale weakly supervised vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6865–6873. Cited by: §2.
- Eccentricity bias as an organizing principle for human high-order object areas. Neuron 34 (3), pp. 479–490. Cited by: §1.
- Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580. Cited by: §2.
- Squeeze-and-excitation networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: Appendix B.
- Shape and arrangement of columns in cat’s striate cortex. The Journal of physiology 165 (3), pp. 559–568. Cited by: §2.
- Receptive fields and functional architecture of monkey striate cortex. The Journal of physiology 195 (1), pp. 215–243. Cited by: §2.
- Learn to pay attention. In International Conference on Learning Representations, External Links: Cited by: §2.
- Dynamic filter networks. In Advances in Neural Information Processing Systems, pp. 667–675. Cited by: §2, §4.2.
- One weird trick for parallelizing convolutional neural networks. arXiv preprint arXiv:1404.5997. Cited by: §2.
- Parallel, multi-stage processing of colors, faces and shapes in macaque inferior temporal cortex. Nature neuroscience 16 (12), pp. 1870. Cited by: §1.
- Tiled convolutional neural networks. In Advances in neural information processing systems, pp. 1279–1287. Cited by: §2.
- Building high-level features using large scale unsupervised learning. In Proceedings of the 29th International Conference on Machine Learning, pp. 507–514. Cited by: §2.
- Deep learning. nature 521 (7553), pp. 436–444. Cited by: §1.
- Generalization and network design strategies. In Connectionism in Perspective, R. Pfeifer, Z. Schreter, F. Fogelman, and L. Steels (Eds.), Zurich, Switzerland. Note: an extended version was published as a technical report of the University of Toronto.
- Learning what and where to attend with humans in the loop. In International Conference on Learning Representations.
- An intriguing failing of convolutional neural networks and the CoordConv solution. In Advances in Neural Information Processing Systems, pp. 9605–9616.
- SphereFace: deep hypersphere embedding for face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 212–220.
- Development of the macaque face-patch system. Nature Communications 8 (1), pp. 1–12.
- Deciding how to decide: dynamic routing in artificial neural networks. In Proceedings of the 34th International Conference on Machine Learning, Vol. 70, pp. 2363–2372.
- Bayesian deep convolutional networks with many channels are Gaussian processes.
- Simplifying neural networks by soft weight-sharing. Neural Computation 4 (4), pp. 473–493.
- Natural image statistics and efficient coding. Network: Computation in Neural Systems 7 (2), pp. 333–339.
- Large-scale deep unsupervised learning using graphics processors. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 873–880.
- Retinotopy versus face selectivity in macaque visual cortex. Journal of Cognitive Neuroscience 26 (12), pp. 2691–2700.
- Stand-alone self-attention in vision models. In Advances in Neural Information Processing Systems.
- Statistics of natural images: scaling in the woods. In Advances in Neural Information Processing Systems, pp. 551–558.
- Connectivity precedes function in the development of the visual word form area. Nature Neuroscience 19 (9), pp. 1250–1255.
- FaceNet: a unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 815–823.
- Natural image statistics and neural representation. Annual Review of Neuroscience 24 (1), pp. 1193–1216.
- Novel domain formation reveals proto-architecture in inferotemporal cortex. Nature Neuroscience 17 (12), pp. 1776.
- DeepID3: face recognition with very deep neural networks. arXiv preprint arXiv:1502.00873.
- Deep learning face representation from predicting 10,000 classes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1891–1898.
- DeepFace: closing the gap to human-level performance in face verification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1701–1708.
- Large-scale object recognition with CUDA-accelerated hierarchical neural networks. In 2009 IEEE International Conference on Intelligent Computing and Intelligent Systems, Vol. 1, pp. 536–541.
- Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008.
- Self-organization of orientation sensitive cells in the striate cortex. Kybernetik 14 (2), pp. 85–100.
- Residual attention network for image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3156–3164.
- CBAM: convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 3–19.
- CondConv: conditionally parameterized convolutions for efficient inference. In Advances in Neural Information Processing Systems, pp. 1305–1316.
- Rotating your face using multi-task deep neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 676–684.
- Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122.
- Deep region and multi-label learning for facial action unit detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3391–3399.
Appendix A Supplementary figures
Appendix B Input-dependent combining weights network
The architecture of the input-dependent combining weights network is illustrated in Figure Supp.5. The network first projects the input channels to a low-dimensional space using a convolution. This projection keeps the parameter count small, and reflects the fact that selecting filter banks from the basis set is potentially a simpler task than the classification task the network is performing. Motivated by work on segmentation (Chen et al., 2017a; Yu and Koltun, 2015; Chen et al., 2017b), the second operation collects statistics across different scales of the input using parallel pooling and dilated depth-wise convolution layers followed by bilinear resizing. The increase in parameters here is small because of the initial projection step and the use of depth-wise convolution. The next stage is a nonlinear low-dimensional bottleneck followed by a nonlinear dimensionality expansion with a convolution. This operation is similar in flavor to the Squeeze-and-Excitation operation (Hu et al., 2018), and is included to give the network the capacity to learn a useful embedding of the input. The last layer is a linear convolution that reduces the number of channels to the spatial rank.
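The pipeline above (projection, multi-scale context, SE-like bottleneck and expansion, linear reduction to the spatial rank) can be sketched in PyTorch as follows. All sizes here (`proj_dim`, `bottleneck_dim`, the dilation rates, and the choice of a global-average-pooling branch) are illustrative assumptions, not the exact hyperparameters of the network described above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CombiningWeightsNet(nn.Module):
    """Sketch of the input-dependent combining-weights network.

    Hypothetical hyperparameters; the structure follows the description:
    projection -> multi-scale context -> bottleneck/expansion -> spatial rank.
    """

    def __init__(self, in_ch, spatial_rank, proj_dim=8, bottleneck_dim=4):
        super().__init__()
        # 1. Project the input channels to a low-dimensional space.
        self.proj = nn.Conv2d(in_ch, proj_dim, kernel_size=1)
        # 2. Parallel dilated depth-wise convolutions collect statistics
        #    at multiple scales (dilation rates are assumptions).
        self.dw = nn.ModuleList([
            nn.Conv2d(proj_dim, proj_dim, kernel_size=3, padding=d,
                      dilation=d, groups=proj_dim)  # depth-wise
            for d in (1, 2, 4)
        ])
        branches = len(self.dw) + 1  # plus a global-average-pooling branch
        # 3. Nonlinear bottleneck followed by nonlinear expansion (SE-like).
        self.bottleneck = nn.Conv2d(branches * proj_dim, bottleneck_dim, 1)
        self.expand = nn.Conv2d(bottleneck_dim, proj_dim, 1)
        # 4. Final linear convolution reducing channels to the spatial rank.
        self.out = nn.Conv2d(proj_dim, spatial_rank, 1)

    def forward(self, x):
        h = self.proj(x)
        size = h.shape[-2:]
        # Pooling branch, bilinearly resized back to the feature-map size.
        pooled = F.interpolate(h.mean(dim=(2, 3), keepdim=True), size=size,
                               mode="bilinear", align_corners=False)
        feats = torch.cat([dw(h) for dw in self.dw] + [pooled], dim=1)
        h = F.relu(self.expand(F.relu(self.bottleneck(feats))))
        # One combining weight per spatial position per rank component.
        return self.out(h)
```

For a batch of images `x` with shape `(1, 16, 32, 32)` and `spatial_rank=2`, `CombiningWeightsNet(16, spatial_rank=2)(x)` yields a `(1, 2, 32, 32)` map of per-position combining weights, one channel per rank component.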
Appendix C Supplementary tables
CONVOLUTION (FC) is a convolution network with a fully connected last layer and without global average pooling.