Although the visual world is varied, it nevertheless has ubiquitous structure. Structured factors, such as scale, admit clear theories and efficient representation design. Unstructured factors, such as what makes a cat look like a cat, are too complicated to model analytically, requiring free-form representation learning. How can recognition harness structure without restraining the representation?
Free-form representations are structure-agnostic, making them general, but not exploiting structure is computationally and statistically inefficient. Structured representations like steerable filtering [11, 36, 15], scattering [2, 35], and steerable networks are efficient but constrained to the chosen structures. We propose a new, semi-structured compositional filtering approach that blurs the line between free-form and structured representations and learns both. Doing so learns local features along with their degree of locality.
Free-form filters, directly defined by their parameters, are general and able to cope with unknown variations, but are parameter inefficient: structured factors, such as scale and orientation, must be learned like any other variation, duplicating effort across layers and channels. Nonetheless, end-to-end learning of free-form parameters is commonly the most accurate approach to complex visual recognition tasks when there is sufficient data.
Structured filters, indirectly defined as a function of their parameters, are theoretically clear and parameter efficient, but constrained. Their effectiveness hinges on whether they encompass the true structure of the data; if not, the representation is limiting and subject to error. This is a danger, at least, when structure substitutes for learning rather than complementing it.
We compose free-form and structured filters, as shown in Figure 1, and learn both end-to-end. Free-form filters are not constrained by our composition. This makes our approach more expressive, not less, while still able to efficiently learn the chosen structured factors. In this way our semi-structured networks can reduce to existing networks as a special case. At the same time, our composition can learn different receptive fields that cannot be realized in the standard parameterization of free-form filters. Adding more free-form parameters or dilating cannot learn the same family of filters. Figure 2 offers one example of the impracticality of architectural alternatives.
Gaussian structure represents scale, aspect, and orientation through covariance. Optimizing these factors carries out a form of differentiable architecture search over receptive fields, reducing the need for onerous hand-design or expensive discrete search. Any 2D Gaussian has the same, low number of covariance parameters no matter its spatial extent, so receptive field optimization is low-dimensional and efficient. Because the Gaussian is smooth, our filtering is guaranteed to be proper from a signal processing perspective and avoid aliasing.
Our contributions include: (1) defining semi-structured compositional filtering to bridge classic ideas for scale-space representation design and current practices for representation learning, (2) exploring a variety of receptive fields that our approach can learn, and (3) adapting receptive fields with accurate and efficient dynamic Gaussian structure.
2 Related Work
Composing structured Gaussian filters with free-form learned filters draws on structured filter design and representation learning. Our work is inspired by the transformation invariance of scale-space, the parsimony of steerable filtering [11, 30, 2, 6], and the adaptivity of dynamic inference [28, 16, 9, 8]. Analysis showing that the effective receptive field size of deep networks is limited, and only a fraction of the theoretical size, motivates our goal of making unbounded receptive field size and varied receptive field shapes practically learnable.
Transformation Invariance Gaussian scale-space and its affine extension connect covariance to spatial structure for transformation invariance. We jointly learn structured transformations via Gaussian covariance and features via free-form filtering. Enumerative methods cover a set of transformations, rather than learning to select transformations: image pyramids and feature pyramids [17, 34, 22] cover scale, scattering covers scales and rotations, and steerable networks cover discrete groups. Our learning and inference of covariance relate to scale selection, as exemplified by the scale invariant feature transform. Scale-adaptive convolution likewise selects scales, but without our Gaussian structure and smoothness.
Steering Steering indexes a continuous family of filters by linearly weighting a structured basis, such as Gaussian derivatives. Steerable filters index orientation and deformable kernels index orientation and scale. Such filters can be stacked into a deep, structured network. These methods have elegant structure, but are constrained to it. We make use of Gaussian structure, but keep generality by composing with free-form filters.
Dynamic Inference Dynamic inference adapts the model to each input. Dynamic routing, spatial transformers, dynamic filter nets, and deformable convolution are all dynamic, but lack local structure. We incorporate Gaussian structure to improve efficiency while preserving accuracy.
Proper signal processing, by blurring when downsampling, improves the shift-equivariance of learned filtering. We reinforce these results with our experiments on blurred dilation, complementing their focus on blurred stride. While we likewise blur, and confirm the need for smoothing to prevent aliasing, our focus is on how to jointly learn and compose structured and free-form filters.
3 A Clear Review of Blurring
We introduce the elements of our chosen structured filters first, and then compose free-form filters with this structure in the next section. While the Gaussian and scale-space ideas here are classic, our end-to-end optimized composition and its use for receptive field learning are novel.
3.1 Gaussian Structure
The choice of structure determines the filter characteristics that can be represented and learned.
We choose Gaussian structure. For modeling, it is differentiable for end-to-end learning, low-dimensional for efficient optimization, and still expressive enough to represent a variety of shapes. For signal processing, it is smooth and admits efficient filtering. In particular, the Gaussian has these attractive properties for our purposes:
shift-invariance for convolutional filtering,
normalization to preserve input and gradient norms for stable optimization,
separability to reduce computation by replacing a 2D filter with two 1D filters,
and cascade smoothing from semi-group structure to decompose filtering into smaller, cumulative steps.
The Gaussian kernel in 2-D is

$$g_\Sigma(x) = \frac{1}{2\pi\sqrt{|\Sigma|}}\exp\!\left(-\frac{1}{2}\,x^\top\Sigma^{-1}x\right)$$

for input coordinates $x$ and covariance $\Sigma$, a symmetric positive-definite matrix.
The structure of the Gaussian is controlled by its covariance $\Sigma$. Note that we are concerned with the spatial covariance, where the coordinates are considered as random variables, and not the covariance of the feature dimensions. The elements of the covariance matrix are therefore $\sigma^2_y, \sigma^2_x$ for the y, x coordinates and $\rho$ for their correlation. The standard, isotropic Gaussian has identity covariance $\Sigma = I$. There is progressively richer structure in spherical, diagonal, and full covariance: Figure 3 illustrates these kinds and the scale, aspect, and orientation structure they represent.
Selecting the right spatial covariance yields invariance to a given spatial transformation. The standard Gaussian indexes scale-space, while the full covariance Gaussian indexes its affine extension. We leverage this transformation property of Gaussians to learn receptive field shape in Section 4.1 and dynamically adapt their structure for local spatially invariant filtering in Section 4.2.
From the Gaussian kernel we instantiate a Gaussian filter in the standard way: (1) evaluate the kernel at the coordinates of the filter coefficients and (2) renormalize by the sum to correct for this discretization. We decide the filter size according to the covariance by setting the half size in each dimension to two standard deviations. This covers $\pm 2\sigma$ to include roughly 95% of the true density no matter the covariance. (We found that higher coverage did not improve our results.) Our filters are always odd-sized to keep coordinates centered.
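This instantiation can be sketched in a few lines of NumPy. `gaussian_filter` is a hypothetical helper name, and it assumes the two-standard-deviation half size described above:

```python
import numpy as np

def gaussian_filter(cov):
    """Instantiate a discrete 2-D Gaussian filter from a covariance matrix.

    (1) Evaluate the kernel at the filter coordinates, then
    (2) renormalize by the sum to correct for discretization.
    The half size is two standard deviations per dimension, so the
    filter is always odd-sized and centered.
    """
    cov = np.asarray(cov, dtype=float)
    # Half size: two standard deviations along each axis, rounded up.
    hy, hx = np.ceil(2 * np.sqrt(np.diag(cov))).astype(int)
    ys, xs = np.mgrid[-hy:hy + 1, -hx:hx + 1]
    coords = np.stack([ys, xs], axis=-1)          # (H, W, 2)
    inv = np.linalg.inv(cov)
    # Quadratic form x^T Sigma^{-1} x at every coordinate.
    quad = np.einsum('...i,ij,...j->...', coords, inv, coords)
    g = np.exp(-0.5 * quad)
    return g / g.sum()                            # coefficients sum to 1

# Diagonal covariance: more vertical than horizontal extent.
f = gaussian_filter([[2.0, 0.0], [0.0, 0.5]])
```

Note how the filter size follows from the covariance (here a 7x5 filter), while the number of parameters stays fixed at the covariance entries.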
3.2 Covariance Parameterization & Optimization
The covariance is symmetric positive definite, requiring proper parameterization for unconstrained optimization. We choose the log-Cholesky parameterization for iterative optimization because it is simple and quick to compute: $\Sigma = U^\top U$ for upper-triangular $U$ with positive diagonal. We keep the diagonal positive by storing its log, hence log-Cholesky, and exponentiating when forming $U$. (See Pinheiro and Bates for a primer on covariance parameterization.)
Here is an example for full covariance with elements $\sigma^2_y, \sigma^2_x$ for the y, x coordinates and $\rho$ for their correlation:

$$U = \begin{pmatrix} u_1 & u_2 \\ 0 & u_3 \end{pmatrix}, \qquad \Sigma = U^\top U, \qquad \text{parameters } (\log u_1,\; u_2,\; \log u_3).$$

Spherical and diagonal covariance are parameterized by fixing $u_2 = 0$ and tying/untying the diagonal $u_1, u_3$. Note that we overload notation and use $\Sigma$ interchangeably for the covariance matrix and its log-Cholesky parameters.
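A minimal sketch of the log-Cholesky mapping, assuming an upper-triangular factor stored as (log u1, u2, log u3); any three unconstrained numbers produce a valid covariance:

```python
import numpy as np

def logchol_to_cov(params):
    """Map unconstrained log-Cholesky parameters to a 2x2 covariance.

    params = (log u1, u2, log u3) for the upper-triangular factor
    U = [[u1, u2], [0, u3]]. Exponentiating the stored logs keeps the
    diagonal positive, so Sigma = U^T U is always symmetric positive
    definite with no constraints on the raw parameters.
    """
    lu1, u2, lu3 = params
    U = np.array([[np.exp(lu1), u2],
                  [0.0,         np.exp(lu3)]])
    return U.T @ U

cov = logchol_to_cov([0.3, -0.4, 0.1])  # an arbitrary full covariance
```

At the origin of parameter space the mapping recovers the identity covariance, the standard isotropic Gaussian.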
Our composition learns $\Sigma$ by end-to-end optimization of structured parameters, not statistical estimation of empirical distributions. In this way the Gaussian is determined by the task loss, and not by input statistics, as is more common.
3.3 Learning to Blur
As a pedagogical example, consider the problem of optimizing covariance to reproduce an unknown blur. That is, given a reference image and a blurred version of it, which Gaussian filter causes this blur? Figure 4 shows such an optimization: from an identity-like initialization the covariance parameters quickly converge to the true Gaussian.
Given the full covariance parameterization, optimization controls scale, aspect, and orientation. Each degree of freedom can be seen across the iterates of this example. Had the true blur been simpler, for instance spherical, it could still be swiftly recovered in the full parameterization.
Notice how the size and shape of the filter vary over the course of optimization: this is only possible through structure. For a Gaussian filter, its covariance is the intrinsic structure, and its coefficients follow from it. The filter size and shape change while the dimension of the covariance itself is constant. Lacking structure, free-form parameterization couples the number of parameters and filter size, and so cannot search over size and shape in this fashion.
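The blur-recovery problem above can be sketched in 1-D. As a stand-in for the gradient-based optimization in the text, this toy uses SciPy's scalar minimizer over the reconstruction error; the signal, kernel radius, and true scale are arbitrary choices for illustration:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def blur(signal, sigma, radius=10):
    """Blur a 1-D signal with a sampled, normalized Gaussian."""
    t = np.arange(-radius, radius + 1)
    g = np.exp(-0.5 * (t / sigma) ** 2)
    return np.convolve(signal, g / g.sum(), mode='same')

rng = np.random.default_rng(0)
reference = rng.standard_normal(128)
target = blur(reference, 1.5)     # the "unknown" blur to recover

# Which Gaussian causes this blur? Minimize the reconstruction error
# over the scale parameter.
loss = lambda s: np.mean((blur(reference, s) - target) ** 2)
result = minimize_scalar(loss, bounds=(0.2, 4.0), method='bounded')
```

Because the loss is smooth in the scale, the optimizer recovers the true blur; the full-covariance case in Figure 4 adds aspect and orientation degrees of freedom to the same objective.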
Figure 5: Special cases of the Gaussian are helpful for differentiable model search. (a) The identity is recovered by filtering with a delta as variance goes to zero. (b) A smoothed delta from small variance is a good initialization to make use of pre-training. (c) Global average pooling is recovered as variance goes to infinity. Each filter is normalized separately to highlight the relationship between points.
4 Semi-Structured Compositional Filtering
Our semi-structured composition factorizes the representation into spatial Gaussian receptive fields and free-form features. This composition is a novel approach to making receptive field shape differentiable, low-dimensional, and decoupled from the number of parameters. Our approach jointly learns the structured and free-form parameters while guaranteeing proper sampling for smooth signal processing. Purely free-form filters cannot learn shape and size in this way: shape is entangled in all the parameters and size is bounded by the number of parameters. Purely structured filters, restricted to Gaussians and their derivatives for instance, lack the generality of free-form filters. Our factorization into structured and free-form filters is efficient for the representation, optimization, and inference of receptive fields without sacrificing the generality of features.
Receptive field size is a key design choice in the architecture of fully convolutional networks for local prediction tasks. The problem of receptive field design is commonly encountered with each new architecture, dataset, or task. Optimizing our semi-structured filters is equivalent to differentiable architecture search over receptive field size and shape. By making this choice differentiable, we show that learning can adjust to changes in the architecture and data in Section 5.2. Trying candidate receptive fields by enumeration is expensive, whether by manual search or automated search [44, 18, 25]. Semi-structured composition helps relieve the effort and computational burden of architecture design by relaxing the receptive field from a discrete decision into a continuous optimization.
4.1 Composing with Convolution and Covariance
Our composition combines a free-form filter $f_\theta$ with a structured Gaussian $g_\Sigma$. The computation of our composition reduces to convolution, and so it inherits the efficiency of aggressively tuned convolution implementations. Convolution is associative, so compositional filtering of an input $I$ can be decomposed into two steps of convolution by

$$(f_\theta * g_\Sigma) * I = f_\theta * (g_\Sigma * I).$$
This decomposition has computational advantages. The Gaussian step can be done by specialized filtering that harnesses separability, cascade smoothing, and other Gaussian structure. Memory can be spared by only keeping the covariance parameters and recreating the Gaussian filters as needed (which is quick, although it is a space-time tradeoff). Each compositional filter can always be explicitly formed by $f_\theta * g_\Sigma$ for visualization (see Figure 1) or other analysis.
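The associativity behind this two-step decomposition is easy to check numerically; a 1-D sketch with arbitrary filters:

```python
import numpy as np

# Associativity of convolution: filtering with the composed filter
# (free-form * Gaussian) equals Gaussian-smoothing the input first,
# then applying the free-form filter.
rng = np.random.default_rng(0)
f = rng.standard_normal(5)        # free-form filter (1-D for brevity)
t = np.arange(-6, 7)
g = np.exp(-0.5 * (t / 1.5) ** 2)
g /= g.sum()                      # structured Gaussian filter
x = rng.standard_normal(64)       # input signal

compose_then_filter = np.convolve(np.convolve(f, g), x)
smooth_then_filter = np.convolve(f, np.convolve(g, x))
```

The two orders agree to floating-point precision, so the implementation is free to pick whichever step order is cheaper.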
Both $f_\theta$ and $g_\Sigma$ are differentiable for end-to-end learning.
How the composition is formed alters the effect of the Gaussian on the free-form filter. Composing by convolution with the Gaussian then the free-form filter has two effects: it shapes and blurs the filter. Composing by convolution with the Gaussian and resampling according to the covariance purely shapes the filter. That is, blurring and resampling first blurs with the Gaussian, and then warps the sampling points for the following filtering by the covariance. Either operation might have a role in representation learning, so we experiment with each in Table 2. In both cases the composed filter is dense, unlike a sparse filter from dilation.
When considering covariance optimization as differentiable receptive field search, there are special cases of the Gaussian that are useful for particular purposes. See Figure 5 for how the Gaussian can be reduced to the identity, initialized near the identity, or reduced to average pooling. The Gaussian includes the identity in the limit, so our models can recover a standard network without our composition of structure. By initializing near the identity, we are able to augment pre-trained networks without interference, and let learning decide whether or not to make use of structure.
Blurring for Smooth Signal Processing Blurring (and resampling) by the covariance guarantees proper sampling for correct signal processing. It synchronizes the degree of smoothing and the sampling rate to avoid aliasing. Their combination can be interpreted as a smooth, circular extension of dilated convolution [4, 38] or as a smooth, affine restriction of deformable convolution. Figure 6 contrasts dilation with blurring & resampling. For a further perspective, note this combination is equivalent to downsampling/upsampling with a Gaussian before/after convolving with the free-form filter.
Dilated filtering is prone to visible gridding artifacts. We identify these artifacts as aliasing caused by the spatial sparsity of dilated filters, and fix them by smoothing with standard deviation proportional to the dilation rate. Smoothing when subsampling is a fundamental technique in signal processing to avoid aliasing, and the combination serves as a simple alternative to the careful re-engineering of dilated architectures. Improvements from blurring dilation are reported in Table 3.
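The anti-aliasing argument can be demonstrated directly: pre-blurring attenuates content above the new Nyquist limit before subsampling, so it cannot fold back into the result. A 1-D NumPy sketch with an arbitrary near-Nyquist cosine:

```python
import numpy as np

# A cosine near the Nyquist rate: under rate-2 subsampling it aliases
# to a strong low-frequency component unless it is smoothed away first.
n = np.arange(256)
x = np.cos(0.9 * np.pi * n)

t = np.arange(-8, 9)
g = np.exp(-0.5 * t ** 2)   # Gaussian blur, sigma = 1, for rate-2 subsampling
g /= g.sum()

naive = x[::2]                                 # aliased subsampling
blurred = np.convolve(x, g, mode='same')[::2]  # blur, then subsample

# Compare energies away from the boundary to ignore edge effects.
aliased_energy = np.mean(naive[8:-8] ** 2)
smooth_energy = np.mean(blurred[8:-8] ** 2)
```

The naive subsampling keeps nearly all the energy of the folded frequency, while blurring first suppresses it by orders of magnitude.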
Compound Gaussian Structure Gaussian filters have a special compositional structure we can exploit: cascade smoothing. Composing a Gaussian with covariance $\Sigma_1$ and a Gaussian with covariance $\Sigma_2$ is still Gaussian, with covariance $\Sigma_1 + \Sigma_2$. This lets us efficiently assemble compound receptive fields made of multiple Gaussians. Center-surround receptive fields, which boost contrast, can be realized by such a combination as Difference-of-Gaussian (DoG) filters, which subtract a larger Gaussian from a smaller Gaussian. Our joint learning of their covariances tunes the contrastive context of the receptive field, extending prior work that learns contrastive filters with fixed receptive field sizes.
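Cascade smoothing and the DoG construction can both be checked numerically; `gauss1d` is a hypothetical helper, shown in 1-D for brevity:

```python
import numpy as np

def gauss1d(var, radius=16):
    """Sampled, normalized 1-D Gaussian with the given variance."""
    t = np.arange(-radius, radius + 1)
    g = np.exp(-0.5 * t ** 2 / var)
    return g / g.sum()

# Cascade smoothing: convolving two Gaussians gives a Gaussian whose
# variance is the sum, so compound receptive fields compose cheaply.
cascade = np.convolve(gauss1d(2.0), gauss1d(3.0))
direct = gauss1d(2.0 + 3.0, radius=32)

# Difference-of-Gaussian (center-surround): subtract a larger Gaussian
# from a smaller one; the coefficients sum to zero, boosting contrast.
dog = gauss1d(2.0, radius=32) - gauss1d(8.0, radius=32)
```

Here the surround variance is an arbitrary choice; in our models both covariances are learned jointly.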
Design Choices Having defined our semi-structured composition, we cover the design choices involved in its application. As a convolutional composition, it can augment any convolution layer in the architecture. We focus on including our composition in late, deep layers to show the effect without much further processing. We add compositional filtering to the output and decoder layers of fully convolutional networks because the local tasks they address rely on the choice of receptive fields.
Having decided where to compose, we must decide how much structure to compose. There are degrees of structure, from minimal structure, where each layer or stage has only one shared Gaussian, to dynamic structure, where each receptive field has its own structure that varies with the input. In between there is channel structure, where each free-form filter has its own Gaussian shared across space, or multiple structure, where each layer or filter has multiple Gaussians to cover different shapes. We explore minimal structure and dynamic structure in order to examine the effect of composition for static and dynamic inference, and leave the other degrees of structure to future work.
4.2 Dynamic Gaussian Structure
Semi-structured composition learns a rich family of receptive fields, but visual structure is richer still, because structure locally varies while our filters are fixed. Even a single image contains variations in scale and orientation, so one-size-and-shape-fits-all structure is suboptimal. Dynamic inference replaces static, global parameters with dynamic, local parameters that are inferred from the input to adapt to these variations. Composing with structure by convolution cannot locally adapt, since the filters are constant across the image. We can nevertheless extend our composition to dynamic structure by representing local covariances and instantiating local Gaussians accordingly. Our composition makes dynamic inference efficient by decoupling low-dimensional, Gaussian structure from high-dimensional, free-form filters.
There are two routes to dynamic Gaussian structure: local filtering and deformable sampling. Local filtering has a different filter kernel for each position, as done by dynamic filter networks. This ensures exact filtering for dynamic Gaussians, but is too computationally demanding for large-scale recognition networks. Deformable sampling adjusts the position of filter taps by arbitrary offsets, as done by deformable convolution. We exploit deformable sampling to dynamically form sparse approximations of Gaussians.
We constrain deformable sampling to Gaussian structure by setting the sampling points through the covariance. Figure 7 illustrates these Gaussian deformations. We relate the default deformation to the standard Gaussian by placing one point at the origin and circling it with a ring of eight points on the unit circle at equal distances and angles. We consider the same progression of spherical, diagonal, and full covariance for dynamic structure. This low-dimensional structure differs from the high degrees of freedom in a dynamic filter network, which sets free-form filter parameters, and in deformable convolution, which sets free-form offsets. In this way our semi-structured composition requires only a small, constant number of covariance parameters independent of the sampling resolution and the kernel size, while deformable convolution requires $2k^2$ offset parameters for a $k \times k$ filter.
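A sketch of Gaussian-structured sampling offsets, assuming the origin-plus-ring layout described above; the covariance square root is taken by Cholesky, one of several valid choices:

```python
import numpy as np

def gaussian_offsets(cov):
    """Sampling offsets for nine taps, constrained to Gaussian structure.

    One tap at the origin and a ring of eight taps on the unit circle
    (the default, standard-Gaussian deformation), warped by a square
    root of the covariance so the ring follows its scale, aspect, and
    orientation.
    """
    angles = np.arange(8) * np.pi / 4
    ring = np.stack([np.sin(angles), np.cos(angles)], axis=-1)  # (8, 2) as (y, x)
    base = np.vstack([[0.0, 0.0], ring])
    L = np.linalg.cholesky(np.asarray(cov, dtype=float))
    return base @ L.T  # warp each offset o -> L o

# Spherical covariance with sigma = 2 simply dilates the ring.
offsets = gaussian_offsets([[4.0, 0.0], [0.0, 4.0]])
```

Whatever the covariance, only its few parameters need to be regressed per position, rather than a free offset per tap.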
To infer the local covariances, we follow the deformable approach and learn a convolutional regressor for each dynamic filtering step. The regressor, which is simply a convolution layer, first infers the covariances which then determine the dynamic filtering that follows. The low-dimensional structure of our dynamic parameters makes this regression more efficient than free-form deformation, as it only has three outputs for each full covariance, or even just one for each spherical covariance. Since the covariance is differentiable, the regression is learned end-to-end from the task loss without further supervision.
We experiment with dynamic structure in Section 5.3.
5 Experiments

We experiment with the local task of semantic segmentation, because our method learns the size and shape of local receptive fields. As a recognition task, semantic segmentation requires a balance between local scope, to infer where, and global scope, to infer what. Existing approaches must take care with receptive field design, and their experimental development takes significant model search.
Data Cityscapes is a challenging dataset of varied urban scenes from the perspective of a car-mounted camera. We follow the standard training and evaluation protocols and train/validation splits with finely-annotated images. We score results by the common intersection-over-union metric—the intersection of predicted and true pixels divided by their union then averaged over classes—on the validation set. We evaluate the network itself without post-processing, test-time augmentation, or other accessories to isolate the effect of receptive field learning.
Architecture and Optimization For backbones we choose strong fully convolutional networks derived from residual networks. The dilated residual net (DRN) has high resolution and receptive field size through dilation. Deep layer aggregation (DLA) fuses layers by hierarchical and iterative skip connections. We also define a ResNet-34 backbone as a simple architecture of the kind used for ablations and exploratory experiments. Together they are representative of common architectural patterns in state-of-the-art fully convolutional networks.
We train our models by stochastic gradient descent with momentum, batch size 16, and weight decay. Training follows the "poly" learning rate schedule [5, 43]. The input images are cropped and augmented by random scaling, random rotation within 10 degrees, and random color distortions. We train with synchronized, in-place batch normalization. For fair comparison, we reproduce the DRN and DLA baselines in our same setting, which improves on their reported results.
The chosen DRN and DLA architectures are strong methods on their own, but they can be further equipped for learning global spatial transformations and local deformations. Spatial transformer networks and deformable convolution learn dynamic global/local transformations respectively. Spatial transformers serve as a baseline for structure, because they are restricted to a parametric class of transformations. Deformable convolution serves as a baseline for local, dynamic inference without structure. For comparison in the static setting, we simplify both methods to instead learn static transformations.
Naturally, because our composition is carried out by convolution (for static inference), we compare to the baseline of including a free-form convolution layer on its own.
We will release code and reference models for our static and dynamic compositional filtering methods.
5.1 Learning Semi-Structured Filters
We first show that semi-structured compositional filtering improves the accuracy of strong fully convolutional networks. We then examine how to best implement our composition and confirm the value of smooth signal processing.
Augmenting Backbone Architectures Semi-structured filtering improves the accuracy of strong fully convolutional networks. We augment the last, output stage with a single instance of our composition and optimize end-to-end. See Table 1 for the accuracies of the backbones, baselines, and our filtering. Static composition by convolution improves on the backbone by 1-2 points, and dynamic composition boosts the improvement to 4 points (see Section 5.3).
Our simple composition improves on the accuracy of the static receptive fields learned by a spatial transformer and by deformable convolution. Spatial transformers and our static composition each learn a global transformation, but our Gaussian parameterization is more effectively optimized. Deformable convolution learns local receptive fields, but its free-form parameterization takes more computation and memory. Our DoG variant, which learns the surround size, improves accuracy by a further point.
Note that the backbones are aggressively tuned architectures which required significant model search and engineering effort. Our composition is still able to deliver improvement through learning without further engineering. In the next subsection, we show that joint optimization of our composition does effective model search when the chosen architecture is suboptimal.
Table 1 (excerpt): validation accuracy (IU) for baselines and our filtering.

| Method | IU |
|---|---|
| + STN (static) | 70.5 |
| + Deformable (static) | 72.2 |
| + Composition (ours) | 73.5 |
| + CCL | 73.1 |
| + DoG (ours) | 74.1 |
| + Composition (ours) | 78.2 |
How to Compose As explained in Section 4.1, we can compose with a Gaussian structured filter by blurring alone or blurring and resampling. As either can be learned end-to-end, we experiment with both and report their accuracies in Table 2. From this comparison we choose blurring and resampling for the remainder of our experiments.
Table 2 (excerpt): how to compose.

| Method | IU |
|---|---|
| + DoG Blur | 70.3 |
| + DoG Blur-Resample | 71.4 |

Table 3 (excerpt): blurred dilation.

| Method | IU |
|---|---|
| w/ CCL | 73.1 |
| w/ ASPP | 74.1 |
Blurred Dilation To isolate the effect of blurring without learning, we smooth dilation with a blur proportional to the dilation rate. CCL and ASPP are carefully designed dilation architectures for context modeling, but neither blurs before dilating. Improvements from blurred dilation are reported in Table 3. Although the gains are small, this establishes that smoothing can help, and the effect should only increase with dilation rate.
The small marginal effect of blurring without learning shows that most of our improvement is from joint optimization of our composition and dynamic inference.
5.2 Differentiable Receptive Field Search
Our composition makes local receptive fields differentiable in a low-dimensional, structured parameterization. This turns choosing receptive fields into a task for learning, instead of designing or manual searching. We demonstrate that this differentiable receptive field search is able to adjust for changes in the architecture and data. Table 4 shows how receptive field optimization counteracts the reduction of the architectural receptive field size and the enlargement of the input. These controlled experiments, while simple, reflect a realistic lack of knowledge in practice: for a new architecture or dataset, the right design is unknown.
For these experiments we include our composition in the last stage of the network and only optimize this stage. We do this to limit the scope of learning to the joint optimization of our composition, since then any effect is only attributable to the composition itself. We verify that end-to-end learning further improves results, but controlling for it in this way eliminates the possibility of confounding effects.
In the extreme, we can do structural fine-tuning by including our composition in a pre-trained network and only optimizing the covariance. When fine-tuning the structure alone, optimization either reduces the Gaussian to a delta, doing no harm, or slightly enlarges the receptive field, giving a one point boost. Therefore the special case of the identity, as explained in Figure 5, is learnable in practice. This shows that our composition helps or does no harm, and further supports the importance of jointly learning the composition as we do.
[Table 4: receptive field optimization under a smaller architectural receptive field and an enlarged input.]
5.3 Dynamic Inference of Gaussian Structure
Learning the covariance optimizes receptive field size and shape. Dynamic inference of the covariance takes this a step further, and adaptively adjusts receptive fields to vary with the input. By locally regressing the covariance, our approach can better cope with factors of variation within an image, and do so efficiently through structure.
Table 5 (excerpt): Gaussian vs. free-form deformation across backbones.

| Method | Dynamic | Cov. dim. | IU |
|---|---|---|---|
| + Static Composition (ours) | - | | 73.5 |
| + Gauss. Deformation (ours) | ✓ | 1 | 76.6 |
| + Free-form Deformation | ✓ | | 76.6 |
| + Static Composition (ours) | - | | 68.1 |
| + Gauss. Deformation (ours) | ✓ | 1 | 74.2 |
| + Free-form Deformation | ✓ | | 75.1 |
| + Gauss. Deformation (ours) | ✓ | 1 | 74.3 |
| + Free-form Deformation | ✓ | | 73.6 |
We compare our Gaussian deformation with free-form deformation in Table 5. Controlling deformable convolution by Gaussian structure improves efficiency while preserving accuracy to within one point. While free-form deformations are more general in principle, in practice there is a penalty in efficiency. Recall that the size of our structured parameterization is independent of the free-form filter size; unstructured deformable convolution instead requires $2k^2$ offset parameters for a $k \times k$ filter.
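The parameter-count gap is simple arithmetic; this sketch tabulates it for a few common kernel sizes (at most three covariance parameters, versus two offsets per tap):

```python
# Per-position regression outputs: Gaussian structure needs a constant,
# small number of covariance parameters regardless of kernel size,
# while free-form deformation needs 2 offsets (y, x) per tap.
def gaussian_params(kind):
    return {'spherical': 1, 'diagonal': 2, 'full': 3}[kind]

def deformable_params(k):
    return 2 * k * k  # two offsets for each of the k*k taps

counts = {k: deformable_params(k) for k in (3, 5, 7)}
```

Even for a 3x3 filter the free-form regressor has six times the outputs of a full covariance, and the gap grows quadratically with kernel size.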
Qualitative results for dynamic Gaussian structure are shown in Figure 8. The inferred local scales reflect scale structure in the input.
In these experiments we restrict the Gaussian to spherical covariance with a single degree of freedom for scale. Our results show that making scale dynamic through spherical covariance suffices to achieve essentially equal accuracy as general, free-form deformations. Including further degrees of freedom by diagonal and full covariance does not give further improvement on this task and data. As scale is perhaps the most ubiquitous transformation in the distribution of natural images, scale modeling might suffice to handle many variations.
6 Conclusion

Composing structured Gaussian and free-form filters makes receptive field size and shape differentiable for direct optimization. Through receptive field learning, our semi-structured models do by gradient optimization what current free-form models have done by discrete design: changes that would otherwise require a different free-form architecture become changes in structured weights.
Our method learns local receptive fields. While we have focused on locality in space, the principle is more general, and extends to locality in time and other dimensions.
Factorization of this sort points to a reconciliation of structure and learning, through which known structure is respected and unknown detail is learned freely.
References

- J. Babaud, A. P. Witkin, M. Baudin, and R. O. Duda. Uniqueness of the Gaussian kernel for scale-space filtering. TPAMI, 1986.
-  J. Bruna and S. Mallat. Invariant scattering convolution networks. TPAMI, 2013.
- P. Burt and E. Adelson. The Laplacian pyramid as a compact image code. IEEE Transactions on Communications, 31(4):532–540, 1983.
-  L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected CRFs. In ICLR, 2015.
-  L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. TPAMI, 2018.
-  T. S. Cohen and M. Welling. Steerable cnns. In ICLR, 2017.
- M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The cityscapes dataset for semantic urban scene understanding. In CVPR, 2016.
-  J. Dai, H. Qi, Y. Xiong, Y. Li, G. Zhang, H. Hu, and Y. Wei. Deformable convolutional networks. In ICCV, 2017.
-  B. De Brabandere, X. Jia, T. Tuytelaars, and L. Van Gool. Dynamic filter networks. In NIPS, 2016.
-  H. Ding, X. Jiang, B. Shuai, A. Q. Liu, and G. Wang. Context contrasted feature and gated multi-scale aggregation for scene segmentation. In CVPR, 2018.
-  W. T. Freeman and E. H. Adelson. The design and use of steerable filters. TPAMI, 1991.
- K. Fukushima. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36(4):193–202, 1980.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
-  A. G. Howard. Some improvements on deep convolutional neural network based image classification. arXiv preprint arXiv:1312.5402, 2013.
-  J.-H. Jacobsen, J. van Gemert, Z. Lou, and A. W. Smeulders. Structured Receptive Fields in CNNs. In CVPR, 2016.
-  M. Jaderberg, K. Simonyan, A. Zisserman, and K. Kavukcuoglu. Spatial transformer networks. In NIPS, 2015.
-  A. Kanazawa, A. Sharma, and D. Jacobs. Locally scale-invariant convolutional neural networks. arXiv preprint arXiv:1412.5104, 2014.
-  K. Kandasamy, W. Neiswanger, J. Schneider, B. Poczos, and E. P. Xing. Neural architecture search with bayesian optimisation and optimal transport. In NIPS, 2018.
-  J. J. Koenderink. The structure of images. Biological cybernetics, 50(5):363–370, 1984.
-  S. W. Kuffler. Discharge patterns and functional organization of mammalian retina. Journal of neurophysiology, 16(1):37–68, 1953.
-  Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
-  T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramid networks for object detection. In CVPR, 2017.
- T. Lindeberg. Scale-space theory in computer vision, volume 256. Springer Science & Business Media, 1994.
-  T. Lindeberg. Feature detection with automatic scale selection. International journal of computer vision, 30(2):79–116, 1998.
-  H. Liu, K. Simonyan, and Y. Yang. Darts: Differentiable architecture search. In ICLR, 2019.
-  D. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 2004.
- W. Luo, Y. Li, R. Urtasun, and R. Zemel. Understanding the effective receptive field in deep convolutional neural networks. In NIPS, 2016.
-  B. A. Olshausen, C. H. Anderson, and D. C. Van Essen. A neurobiological model of visual attention and invariant pattern recognition based on dynamic routing of information. Journal of Neuroscience, 13(11):4700–4719, 1993.
-  A. V. Oppenheim and R. W. Schafer. Discrete-Time Signal Processing. Prentice Hall Press, Upper Saddle River, NJ, USA, 3rd edition, 2009.
-  P. Perona. Deformable kernels for early vision. TPAMI, 1995.
-  J. C. Pinheiro and D. M. Bates. Unconstrained parametrizations for variance-covariance matrices. Statistics and computing, 6(3):289–296, 1996.
-  R. W. Rodieck and J. Stone. Analysis of receptive fields of cat retinal ganglion cells. Journal of neurophysiology, 28(5):833–849, 1965.
-  S. Rota Bulò, L. Porzi, and P. Kontschieder. In-place activated batchnorm for memory-optimized training of dnns. In CVPR, 2018.
-  E. Shelhamer, J. Long, and T. Darrell. Fully convolutional networks for semantic segmentation. TPAMI, 2016.
-  L. Sifre and S. Mallat. Rotation, scaling and deformation invariant scattering for texture discrimination. In CVPR, 2013.
-  E. P. Simoncelli and W. T. Freeman. The steerable pyramid: A flexible architecture for multi-scale derivative computation. In ICIP, 1995.
-  P. Wang, P. Chen, Y. Yuan, D. Liu, Z. Huang, X. Hou, and G. Cottrell. Understanding convolution for semantic segmentation. In WACV, 2018.
-  F. Yu and V. Koltun. Multi-scale context aggregation by dilated convolutions. In ICLR, 2016.
-  F. Yu, V. Koltun, and T. Funkhouser. Dilated residual networks. In CVPR, 2017.
-  F. Yu, D. Wang, E. Shelhamer, and T. Darrell. Deep layer aggregation. In CVPR, 2018.
- R. Zhang. Making convolutional networks shift-invariant again. In ICML, 2019.
-  R. Zhang, S. Tang, Y. Zhang, J. Li, and S. Yan. Scale-adaptive convolutions for scene parsing. In ICCV, pages 2031–2039, 2017.
-  H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia. Pyramid scene parsing network. In CVPR, 2017.
- B. Zoph and Q. V. Le. Neural architecture search with reinforcement learning. In ICLR, 2017.