1 Introduction
Although the visual world is varied, it nevertheless has ubiquitous structure. Structured factors, such as scale, admit clear theories and efficient representation design. Unstructured factors, such as what makes a cat look like a cat, are too complicated to model analytically, requiring freeform representation learning. How can recognition harness structure without restraining the representation?
Free-form representations are structure-agnostic, making them general, but not exploiting structure is computationally and statistically inefficient. Structured representations like steerable filtering [11, 36, 15], scattering [2, 35], and steerable networks [6] are efficient but constrained to the chosen structures. We propose a new, semi-structured compositional filtering approach to blur the line between free-form and structured representations and learn both. Doing so learns local features and the degree of locality.
Free-form filters, directly defined by their parameters, are general and able to cope with unknown variations, but are parameter inefficient. Structured factors, such as scale and orientation, are enumerated like any other variation, requiring duplicated learning across layers and channels. Nonetheless, end-to-end learning of free-form parameters is commonly the most accurate approach to complex visual recognition tasks when there is sufficient data.
Structured filters, indirectly defined as a function of their parameters, are theoretically clear and parameter efficient, but constrained. Their effectiveness hinges on whether they encompass the true structure of the data. If not, the representation is limiting and subject to error. At least, this is a danger when structure is substituted for learning.
We compose free-form and structured filters, as shown in Figure 1, and learn both end-to-end. Free-form filters are not constrained by our composition. This makes our approach more expressive, not less, while still able to efficiently learn the chosen structured factors. In this way our semi-structured networks can reduce to existing networks as a special case. At the same time, our composition can learn different receptive fields that cannot be realized in the standard parameterization of free-form filters. Adding more free-form parameters or dilating cannot learn the same family of filters. Figure 2 offers one example of the impracticality of architectural alternatives.
Gaussian structure represents scale, aspect, and orientation through covariance [23]. Optimizing these factors carries out a form of differentiable architecture search over receptive fields, reducing the need for onerous hand-design or expensive discrete search. Any 2D Gaussian has the same, low number of covariance parameters no matter its spatial extent, so receptive field optimization is low-dimensional and efficient. Because the Gaussian is smooth, our filtering is guaranteed to be proper from a signal processing perspective and to avoid aliasing.
Our contributions include: (1) defining semi-structured compositional filtering to bridge classic ideas for scale-space representation design and current practices for representation learning, (2) exploring a variety of receptive fields that our approach can learn, and (3) adapting receptive fields with accurate and efficient dynamic Gaussian structure.
2 Related Work
Composing structured Gaussian filters with free-form learned filters draws on structured filter design and representation learning. Our work is inspired by the transformation invariance of scale-space [23], the parsimony of steerable filtering [11, 30, 2, 6], and the adaptivity of dynamic inference [28, 16, 9, 8]. Analysis showing that the effective receptive field size of deep networks is limited [27], and is only a fraction of the theoretical size, motivates our goal of making unbounded receptive field sizes and varied receptive field shapes practically learnable.
Transformation Invariance Gaussian scale-space and its affine extension connect covariance to spatial structure for transformation invariance [23]. We jointly learn structured transformations via Gaussian covariance and features via free-form filtering. Enumerative methods cover a set of transformations, rather than learning to select transformations: image pyramids [3] and feature pyramids [17, 34, 22] cover scale, scattering [2] covers scales and rotations, and steerable networks [6] cover discrete groups. Our learning and inferring of covariance relates to scale selection [24], as exemplified by the scale-invariant feature transform [26]. Scale-adaptive convolution [42] likewise selects scales, but without our Gaussian structure and smoothness.
Steering Steering indexes a continuous family of filters by linearly weighting a structured basis, such as Gaussian derivatives. Steerable filters [11] index orientation and deformable kernels [30] index orientation and scale. Such filters can be stacked into a deep, structured network [15]. These methods have elegant structure, but are constrained to it. We make use of Gaussian structure, but keep generality by composing with free-form filters.
Dynamic Inference Dynamic inference adapts the model to each input. Dynamic routing [28], spatial transformers [16], dynamic filter nets [9], and deformable convolution [8] are all dynamic, but lack local structure. We incorporate Gaussian structure to improve efficiency while preserving accuracy.
Signal Processing Proper signal processing, by blurring when downsampling, improves the shift-equivariance of learned filtering [41]. We reinforce these results with our experiments on blurred dilation, complementing their focus on blurred stride. While we likewise blur, and confirm the need for smoothing to prevent aliasing, our focus is on how to jointly learn and compose structured and free-form filters.
3 A Clear Review of Blurring
We introduce the elements of our chosen structured filters first, and then compose free-form filters with this structure in the next section. While the Gaussian and scale-space ideas here are classic, our end-to-end optimized composition and its use for receptive field learning are novel.
3.1 Gaussian Structure
The choice of structure determines the filter characteristics that can be represented and learned.
We choose Gaussian structure. For modeling, it is differentiable for end-to-end learning, low-dimensional for efficient optimization, and still expressive enough to represent a variety of shapes. For signal processing, it is smooth and admits efficient filtering. In particular, the Gaussian has these attractive properties for our purposes:
- shift-invariance for convolutional filtering,
- normalization to preserve input and gradient norms for stable optimization,
- separability to reduce computation by replacing a 2D filter with two 1D filters,
- and cascade smoothing, from semigroup structure, to decompose filtering into smaller, cumulative steps.
In fact, the Gaussian is the unique filter satisfying these and further scale-space axioms [19, 1, 23].
The Gaussian kernel in 2D is

(1)  G(x; \Sigma) = \frac{1}{2\pi |\Sigma|^{1/2}} \exp\left(-\frac{1}{2} x^\top \Sigma^{-1} x\right)

for input coordinates x and covariance \Sigma, a symmetric positive-definite matrix.
The structure of the Gaussian is controlled by its covariance \Sigma. Note that we are concerned with the spatial covariance, where the coordinates are considered as random variables, and not the covariance of the feature dimensions. The elements of the covariance matrix are therefore the variances \sigma_y^2, \sigma_x^2 of the y, x coordinates and the correlation \rho between them. The standard, isotropic Gaussian has identity covariance \Sigma = I. There is progressively richer structure in spherical, diagonal, and full covariance: Figure 3 illustrates these kinds and the scale, aspect, and orientation structure they represent.

Selecting the right spatial covariance yields invariance to a given spatial transformation. The standard Gaussian indexes scale-space, while the full covariance Gaussian indexes its affine extension [23]. We leverage this transformation property of Gaussians to learn receptive field shape in Section 4.1 and to dynamically adapt their structure for local filtering in Section 4.2.
From the Gaussian kernel we instantiate a Gaussian filter in the standard way: (1) evaluate the kernel at the coordinates of the filter coefficients and (2) renormalize by the sum to correct for this discretization. We decide the filter size according to the covariance by setting the half size to two standard deviations in each dimension. This covers 95% of the true density no matter the covariance. (We found that higher coverage did not improve our results.) Our filters are always odd-sized to keep coordinates centered.
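The instantiation recipe above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's implementation; the function name and the exact half-size rounding are our own choices.

```python
import numpy as np

def gaussian_filter_from_cov(cov):
    """Instantiate a discrete 2D Gaussian filter from a covariance matrix.

    The half size is two standard deviations in each dimension, the kernel
    is evaluated at integer coordinates, and the coefficients are
    renormalized to sum to one to correct for discretization.
    """
    cov = np.asarray(cov, dtype=float)
    # Half size: two standard deviations per dimension, at least 1,
    # so the filter is always odd-sized and centered.
    half = np.maximum(np.ceil(2 * np.sqrt(np.diag(cov))).astype(int), 1)
    ys, xs = np.mgrid[-half[0]:half[0] + 1, -half[1]:half[1] + 1]
    coords = np.stack([ys, xs], axis=-1)          # (H, W, 2)
    prec = np.linalg.inv(cov)
    # Unnormalized Gaussian density at each filter coordinate.
    quad = np.einsum('...i,ij,...j->...', coords, prec, coords)
    kernel = np.exp(-0.5 * quad)
    return kernel / kernel.sum()                  # renormalize after discretization
```

For the identity covariance this yields a 5x5 filter; an anisotropic diagonal covariance yields a correspondingly rectangular filter.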
3.2 Covariance Parameterization & Optimization
The covariance \Sigma is symmetric positive definite, requiring proper parameterization for unconstrained optimization. We choose the log-Cholesky parameterization [31] for iterative optimization because it is simple and quick to compute: \Sigma = U^\top U for upper-triangular U with positive diagonal. We keep the diagonal positive by storing its logarithm, hence log-Cholesky, and exponentiating when forming U. (See [31] for a primer on covariance parameterization.)
Here is an example for full covariance with elements \sigma_y^2, \sigma_x^2 for the y, x coordinates and \rho for their correlation:

\Sigma = U^\top U \quad \text{with} \quad U = \begin{pmatrix} e^{u_1} & u_3 \\ 0 & e^{u_2} \end{pmatrix}

Spherical and diagonal covariance are parameterized by fixing u_3 = 0 and tying/untying u_1, u_2. Note that we overload notation and use \Sigma interchangeably for the covariance matrix and its log-Cholesky parameters.
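A minimal sketch of the log-Cholesky map from unconstrained parameters to a valid covariance; the parameter ordering (u1, u2 for the log-diagonal, u3 for the off-diagonal) is our own convention for illustration.

```python
import numpy as np

def cov_from_log_cholesky(u1, u2, u3):
    """Map unconstrained log-Cholesky parameters to a valid covariance.

    U is upper-triangular with a positive diagonal (stored as logs and
    exponentiated), so Sigma = U^T U is symmetric positive definite for
    any real-valued inputs.
    """
    U = np.array([[np.exp(u1), u3],
                  [0.0, np.exp(u2)]])
    return U.T @ U

def spherical_cov(u):
    # Spherical covariance: tie the diagonal and fix the off-diagonal to zero.
    return cov_from_log_cholesky(u, u, 0.0)
```

Because the map is unconstrained, any gradient step on (u1, u2, u3) stays within the family of valid covariances.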
Our composition learns \Sigma by end-to-end optimization of structured parameters, not statistical estimation of empirical distributions. In this way the Gaussian is determined by the task loss, and not by the input statistics, as is more common.
3.3 Learning to Blur
As a pedagogical example, consider the problem of optimizing covariance to reproduce an unknown blur. That is, given a reference image and a blurred version of it, which Gaussian filter causes this blur? Figure 4 shows such an optimization: from an identity-like initialization, the covariance parameters quickly converge to the true Gaussian.
Given the full covariance parameterization, optimization controls scale, aspect, and orientation. Each degree of freedom can be seen across the iterates of this example. Had the true blur been simpler, for instance spherical, it could still be swiftly recovered in the full parameterization.
Notice how the size and shape of the filter vary over the course of optimization: this is only possible through structure. For a Gaussian filter, its covariance is the intrinsic structure, and its coefficients follow from it. The filter size and shape change while the dimension of the covariance itself is constant. Lacking structure, a free-form parameterization couples the number of parameters to the filter size, and so cannot search over size and shape in this fashion.
Figure 5: Special cases of the Gaussian are helpful for differentiable model search: (a) the identity is recovered by filtering with a delta as the variance goes to zero; (b) a smoothed delta from small variance is a good initialization to make use of pretraining; (c) global average pooling is recovered as the variance goes to infinity. Each filter is normalized separately for display.
4 Semi-Structured Compositional Filtering
Composition and backpropagation are the twin engines of deep learning [12, 21]: composing learned linear operations with nonlinearities yields deep representations. Deep visual representations are made by composing convolutions to learn rich features and receptive fields, which characterize the spatial extent of the features. Although each filter might be small and relatively simple, their composition can represent and learn large, complex receptive fields. For instance, a stack of two 3x3 filters is effectively 5x5, but with fewer degrees of freedom (18 vs. 25). Composition therefore induces a factorization of the representation, and this factorization determines the generality and efficiency of the representation.

Our semi-structured composition factorizes the representation into spatial Gaussian receptive fields and free-form features. This composition is a novel approach to making receptive field shape differentiable, low-dimensional, and decoupled from the number of parameters. Our approach jointly learns the structured and free-form parameters while guaranteeing proper sampling for smooth signal processing. Purely free-form filters cannot learn shape and size in this way: shape is entangled in all the parameters and size is bounded by the number of parameters. Purely structured filters, restricted to Gaussians and their derivatives for instance, lack the generality of free-form filters. Our factorization into structured and free-form filters is efficient for the representation, optimization, and inference of receptive fields without sacrificing the generality of features.
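The stacking argument can be checked directly with full-mode convolution: two 3x3 filters compose into one effective 5x5 filter, and filtering in two steps matches filtering once with the effective filter. A sketch with SciPy; the data is random and purely illustrative.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
f1 = rng.standard_normal((3, 3))  # first 3x3 filter (9 parameters)
f2 = rng.standard_normal((3, 3))  # second 3x3 filter (9 parameters)

# The composition of the two filters is one effective 5x5 filter:
# 18 free parameters instead of 25 for a direct 5x5 filter.
effective = convolve2d(f1, f2, mode='full')
assert effective.shape == (5, 5)

# Filtering with f1 then f2 equals filtering once with the effective filter,
# by associativity of convolution.
image = rng.standard_normal((16, 16))
two_step = convolve2d(convolve2d(image, f1, mode='full'), f2, mode='full')
one_step = convolve2d(image, effective, mode='full')
assert np.allclose(two_step, one_step)
```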
Receptive field size is a key design choice in the architecture of fully convolutional networks for local prediction tasks [34]. The problem of receptive field design is commonly encountered with each new architecture, dataset, or task. Optimizing our semi-structured filters is equivalent to differentiable architecture search over receptive field size and shape. By making this choice differentiable, we show that learning can adjust to changes in the architecture and data in Section 5.2. Trying candidate receptive fields by enumeration is expensive, whether by manual search or automated search [44, 18, 25]. Semi-structured composition helps relieve the effort and computational burden of architecture design by relaxing the receptive field from a discrete decision into a continuous optimization.
4.1 Composing with Convolution and Covariance
Our composition combines a free-form filter f_\theta with a structured Gaussian g_\Sigma. The computation of our composition reduces to convolution, and so it inherits the efficiency of aggressively tuned convolution implementations. Convolution is associative, so compositional filtering of an input I can be decomposed into two steps of convolution by

(2)  f_\theta \ast g_\Sigma \ast I = f_\theta \ast (g_\Sigma \ast I).
This decomposition has computational advantages. The Gaussian step can be done by specialized filtering that harnesses separability, cascade smoothing, and other Gaussian structure. Memory can be spared by only keeping the covariance parameters and recreating the Gaussian filters as needed (which is quick, although it is a space-time tradeoff). Each compositional filter can always be explicitly formed as f_\theta \ast g_\Sigma for visualization (see Figure 1) or other analysis. Both \theta and \Sigma are differentiable for end-to-end learning.
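The two-step decomposition can be exercised numerically. In this sketch we use SciPy's separable gaussian_filter for the structured step with a spherical covariance; the interior crop simply avoids boundary effects, and none of this is the paper's implementation.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

rng = np.random.default_rng(1)
image = rng.standard_normal((32, 32))
f = rng.standard_normal((3, 3))  # free-form filter (theta)
sigma = 2.0                      # spherical Gaussian structure (Sigma)

# Two-step compositional filtering: a specialized separable Gaussian blur,
# then ordinary convolution with the small free-form filter.
out1 = convolve(gaussian_filter(image, sigma, mode='constant'),
                f, mode='constant')

# Because both steps are linear and shift-invariant, their order does not
# matter away from the image boundary.
out2 = gaussian_filter(convolve(image, f, mode='constant'),
                       sigma, mode='constant')
assert np.allclose(out1[10:-10, 10:-10], out2[10:-10, 10:-10])
```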
How the composition is formed alters the effect of the Gaussian on the free-form filter. Composing by convolution with the Gaussian and then the free-form filter both shapes and blurs the filter. Composing by convolution with the Gaussian and resampling according to the covariance purely shapes the filter. That is, blurring and resampling first blurs with the Gaussian, and then warps the sampling points for the following filtering by the covariance. Either operation might have a role in representation learning, so we experiment with each in Table 2. In both cases the composed filter is dense, unlike a sparse filter from dilation.
When considering covariance optimization as differentiable receptive field search, there are special cases of the Gaussian that are useful for particular purposes. See Figure 5 for how the Gaussian can be reduced to the identity, initialized near the identity, or reduced to average pooling. The Gaussian includes the identity in the limit, so our models can recover a standard network without our composition of structure. By initializing near the identity, we are able to augment pretrained networks without interference, and let learning decide whether or not to make use of structure.
Blurring for Smooth Signal Processing Blurring (and resampling) by the covariance guarantees proper sampling for correct signal processing. It synchronizes the degree of smoothing and the sampling rate to avoid aliasing. Their combination can be interpreted as a smooth, circular extension of dilated convolution [4, 38] or as a smooth, affine restriction of deformable convolution [8]. Figure 6 contrasts dilation with blurring & resampling. For a further perspective, note this combination is equivalent to downsampling/upsampling with a Gaussian before/after convolving with the freeform filter.
Even without learning the covariance, blurring can improve dilated architectures. Dilation is prone to gridding artifacts [39, 37]. We identify these artifacts as aliasing caused by the spatial sparsity of dilated filters, and fix them by smoothing with standard deviation proportional to the dilation rate. Smoothing when subsampling is a fundamental technique in signal processing to avoid aliasing [29], and the combination serves as a simple alternative to the careful re-engineering of dilated architectures. Improvements from blurring dilation are reported in Table 3.

Compound Gaussian Structure Gaussian filters have a special compositional structure we can exploit: cascade smoothing. Composing a Gaussian with covariance \Sigma and a Gaussian with covariance \Sigma' is still Gaussian, with covariance \Sigma + \Sigma'. This lets us efficiently assemble compound receptive fields made of multiple Gaussians. Center-surround [20] receptive fields, which boost contrast, can be realized by such a combination as Difference-of-Gaussian [32] (DoG) filters, which subtract a larger Gaussian from a smaller Gaussian. Our joint learning of their covariances tunes the contrastive context of the receptive field, extending [10], which learns contrastive filters with fixed receptive field sizes.
Design Choices Having defined our semi-structured composition, we cover the design choices involved in its application. As a convolutional composition, it can augment any convolution layer in the architecture. We focus on including our composition in late, deep layers to show the effect without much further processing. We add compositional filtering to the output and decoder layers of fully convolutional networks because the local tasks they address rely on the choice of receptive fields.
Having decided where to compose, we must decide how much structure to compose. There are degrees of structure, from minimal structure, where each layer or stage has only one shared Gaussian, to dynamic structure, where each receptive field has its own structure that varies with the input. In between there is channel structure, where each freeform filter has its own Gaussian shared across space, or multiple structure, where each layer or filter has multiple Gaussians to cover different shapes. We explore minimal structure and dynamic structure in order to examine the effect of composition for static and dynamic inference, and leave the other degrees of structure to future work.
4.2 Dynamic Gaussian Structure
Semi-structured composition learns a rich family of receptive fields, but visual structure is richer still: structure varies locally, while our filters so far are fixed. Even a single image contains variations in scale and orientation, so one-size-and-shape-fits-all structure is suboptimal. Dynamic inference replaces static, global parameters with dynamic, local parameters that are inferred from the input to adapt to these variations. Composing with structure by convolution cannot locally adapt, since the filters are constant across the image. We can nevertheless extend our composition to dynamic structure by representing local covariances and instantiating local Gaussians accordingly. Our composition makes dynamic inference efficient by decoupling low-dimensional Gaussian structure from high-dimensional, free-form filters.
There are two routes to dynamic Gaussian structure: local filtering and deformable sampling. Local filtering has a different filter kernel for each position, as done by dynamic filter networks [9]. This ensures exact filtering for dynamic Gaussians, but is too computationally demanding for large-scale recognition networks. Deformable sampling adjusts the position of filter taps by arbitrary offsets, as done by deformable convolution [8]. We exploit deformable sampling to dynamically form sparse approximations of Gaussians.
We constrain deformable sampling to Gaussian structure by setting the sampling points through the covariance. Figure 7 illustrates these Gaussian deformations. We relate the default deformation to the standard Gaussian by placing one point at the origin and circling it with a ring of eight points on the unit circle at equal distances and angles. We consider the same progression of spherical, diagonal, and full covariance for dynamic structure. This low-dimensional structure differs from the high degrees of freedom in a dynamic filter network, which sets free-form filter parameters, and deformable convolution, which sets free-form offsets. In this way our semi-structured composition requires only a small, constant number of covariance parameters, independent of the sampling resolution and the kernel size k, while deformable convolution has constant resolution and requires 2k^2 offset parameters for a k x k filter.
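The structured deformation can be sketched as follows: the nine default taps (a center point plus a unit ring of eight) are transformed by a matrix square root of the covariance, so only the few covariance parameters need to be dynamic. The function name and the use of a Cholesky factor as the square root are our own illustrative choices.

```python
import numpy as np

def gaussian_offsets(cov):
    """Map a 2x2 covariance to nine structured sampling offsets.

    The default pattern is one tap at the origin plus eight taps on the
    unit circle at equal angles; transforming by a square root A of the
    covariance (A @ A.T == cov) shapes the ring to the Gaussian's scale,
    aspect, and orientation.
    """
    angles = np.arange(8) * (2 * np.pi / 8)
    ring = np.stack([np.sin(angles), np.cos(angles)], axis=1)  # (8, 2) as (y, x)
    taps = np.vstack([[0.0, 0.0], ring])                       # (9, 2)
    A = np.linalg.cholesky(np.asarray(cov, dtype=float))
    return taps @ A.T

# Spherical covariance with variance 4 scales the unit ring to radius 2.
offsets = gaussian_offsets(4.0 * np.eye(2))
```

With a full covariance, the same nine taps become an oriented ellipse, yet the dynamic parameters remain three numbers rather than eighteen offsets.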
To infer the local covariances, we follow the deformable approach [8] and learn a convolutional regressor for each dynamic filtering step. The regressor, which is simply a convolution layer, first infers the covariances, which then determine the dynamic filtering that follows. The low-dimensional structure of our dynamic parameters makes this regression more efficient than free-form deformation, as it only has three outputs for each full covariance, or even just one for each spherical covariance. Since the covariance is differentiable, the regression is learned end-to-end from the task loss without further supervision.
We experiment with dynamic structure in Section 5.3.
5 Experiments
We experiment with the local task of semantic segmentation, because our method learns the size and shape of local receptive fields. As a recognition task, semantic segmentation requires a balance between local scope, to infer where, and global scope, to infer what. Existing approaches must take care with receptive field design, and their experimental development takes significant model search.
Data Cityscapes [7] is a challenging dataset of varied urban scenes from the perspective of a car-mounted camera. We follow the standard training and evaluation protocols and train/validation splits, with 2,975 finely annotated training images and 500 validation images. We score results by the common intersection-over-union metric (the intersection of predicted and true pixels divided by their union, averaged over classes) on the validation set. We evaluate the network itself without post-processing, test-time augmentation, or other accessories to isolate the effect of receptive field learning.
Architecture and Optimization For backbones we choose strong fully convolutional networks derived from residual networks [13]. The dilated residual net (DRN) [39] has high resolution and receptive field size through dilation. Deep layer aggregation (DLA) [40] fuses layers by hierarchical and iterative skip connections. We also define a ResNet-34 backbone as a simple architecture of the kind used for ablations and exploratory experiments. Together they are representative of common architectural patterns in state-of-the-art fully convolutional networks.
We train our models by stochastic gradient descent for 240 epochs with momentum 0.9, batch size 16, and weight decay. Training follows the "poly" learning rate schedule [5, 43]. The input images are cropped and augmented by random scaling, random rotation within 10 degrees, and random color distortions as in [14]. We train with synchronized, in-place batch normalization [33]. For fair comparison, we reproduce the DRN and DLA baselines in our same setting, which improves on their reported results.

Baselines The chosen DRN and DLA architectures are strong methods on their own, but they can be further equipped for learning global spatial transformations and local deformations. Spatial transformer networks [16] and deformable convolution [8] learn dynamic global and local transformations respectively. Spatial transformers serve as a baseline for structure, because they are restricted to a parametric class of transformations. Deformable convolution serves as a baseline for local, dynamic inference without structure. For comparison in the static setting, we simplify both methods to instead learn static transformations. Naturally, because our composition is carried out by convolution (for static inference), we compare to the baseline of including a free-form convolution layer on its own.
We will release code and reference models for our static and dynamic compositional filtering methods.
5.1 Learning Semi-Structured Filters
We first show that semi-structured compositional filtering improves the accuracy of strong fully convolutional networks. We then examine how to best implement our composition and confirm the value of smooth signal processing.
Augmenting Backbone Architectures Semi-structured filtering improves the accuracy of strong fully convolutional networks. We augment the last, output stage with a single instance of our composition and optimize end-to-end. See Table 1 for the accuracies of the backbones, baselines, and our filtering. Static composition by convolution improves on the backbone by 1-2 points, and dynamic composition boosts the improvement to 4 points (see Section 5.3).
Our simple composition improves on the accuracy of the static receptive fields learned by a spatial transformer and deformable convolution. Spatial transformers and our static composition each learn a global transformation, but our Gaussian parameterization is more effectively optimized. Deformable convolution learns local receptive fields, but its freeform parameterization takes more computation and memory. Our edition of DoG, which learns the surround size, improves the accuracy a further point.
Note that the backbones are aggressively tuned architectures that required significant model search and engineering effort. Our composition is still able to deliver improvement through learning without further engineering. In the next subsection, we show that joint optimization of our composition does effective model search when the chosen architecture is suboptimal.
method                       IU
DRN-A [39]                   72.4
 + Conv.                     72.9
 + STN (static) [16]         70.5
 + Deformable (static) [8]   72.2
 + Composition (ours)        73.5
 + CCL [10]                  73.1
 + DoG (ours)                74.1
DLA-34 [40]                  76.1
 + Composition (ours)        78.2
How to Compose As explained in Section 4.1, we can compose with a Gaussian structured filter by blurring alone or blurring and resampling. As either can be learned endtoend, we experiment with both and report their accuracies in Table 2. From this comparison we choose blurring and resampling for the remainder of our experiments.
method                   IU
ResNet-34                64.8
 + Blur                  66.3
 + Blur-Resample         68.1
 + DoG Blur              70.3
 + DoG Blur-Resample     71.4
DRN-A [39]               72.4
 + Blur                  72.2
 + Blur-Resample         73.5
method           IU
DRN-A [39]       72.4
 w/ CCL [10]     73.1
  + Blur         74.0
 w/ ASPP [5]     74.1
  + Blur         74.3
Blurred Dilation To isolate the effect of blurring without learning, we smooth dilation with a blur proportional to the dilation rate. CCL [10] and ASPP [5] are carefully designed dilation architectures for context modeling, but neither blurs before dilating. Improvements from blurred dilation are reported in Table 3. Although the gains are small, this establishes that smoothing can help. This effect should only increase with dilation rate.
The small marginal effect of blurring without learning shows that most of our improvement is from joint optimization of our composition and dynamic inference.
5.2 Differentiable Receptive Field Search
Our composition makes local receptive fields differentiable in a low-dimensional, structured parameterization. This turns choosing receptive fields into a task for learning instead of design or manual search. We demonstrate that this differentiable receptive field search is able to adjust for changes in the architecture and data. Table 4 shows how receptive field optimization counteracts the reduction of the architectural receptive field size and the enlargement of the input. These controlled experiments, while simple, reflect a realistic lack of knowledge in practice: for a new architecture or dataset, the right design is unknown.
For these experiments we include our composition in the last stage of the network and only optimize this stage. We do this to limit the scope of learning to the joint optimization of our composition, since then any effect is only attributable to the composition itself. We verify that endtoend learning further improves results, but controlling for it in this way eliminates the possibility of confounding effects.
In the extreme, we can do structural finetuning by including our composition in a pretrained network and only optimizing the covariance. When finetuning the structure alone, optimization either reduces the Gaussian to a delta, doing no harm, or slightly enlarges the receptive field, giving a one point boost. Therefore the special case of the identity, as explained in Figure 5, is learnable in practice. This shows that our composition helps or does no harm, and further supports the importance of jointly learning the composition as we do.
method          no. params   epochs   IU     gap
DRN-A [39]      many         240      72.4   0
Smaller Receptive Field
ResNet-34       many         240      64.8   7.6
 + Conv.        some         +20      65.8   6.6
 + Composition  some         +20      68.1   4.6
 + DoG          some         +20      68.9   3.5
 ...End-to-End  many         240      71.4   0.8
and Enlarged Input
ResNet-34       many         240      56.2   16.2
 + Conv.        some         +20      56.7   15.7
 + Composition  some         +20      57.8   14.6
 + DoG          some         +20      62.7   9.7
 ...End-to-End  many         240      66.5   5.9
5.3 Dynamic Inference of Gaussian Structure
Learning the covariance optimizes receptive field size and shape. Dynamic inference of the covariance takes this a step further, and adaptively adjusts receptive fields to vary with the input. By locally regressing the covariance, our approach can better cope with factors of variation within an image, and do so efficiently through structure.
Cityscapes Validation
method                         dyn.?   no. dyn. params   IU
DRN-A [39]                     -       -                 72.4
 + Static Composition (ours)   -       -                 73.5
 + Gauss. Deformation (ours)   yes     1                 76.6
 + Free-form Deformation [8]   yes     18                76.6
ResNet-34                      -       -                 64.8
 + Static Composition (ours)   -       -                 68.1
 + Gauss. Deformation (ours)   yes     1                 74.2
 + Free-form Deformation [8]   yes     18                75.1

Cityscapes Test
DRN-A [39]                     -       -                 71.2
 + Gauss. Deformation (ours)   yes     1                 74.3
 + Free-form Deformation [8]   yes     18                73.6
We compare our Gaussian deformation with free-form deformation in Table 5. Controlling deformable convolution by Gaussian structure improves efficiency while preserving accuracy to within one point. While free-form deformations are more general in principle, in practice there is a penalty in efficiency. Recall that the size of our structured parameterization is independent of the free-form filter size. On the other hand, unstructured deformable convolution requires 2k^2 parameters for a k x k filter.
Qualitative results for dynamic Gaussian structure are shown in Figure 8. The inferred local scales reflect scale structure in the input.
In these experiments we restrict the Gaussian to spherical covariance, with a single degree of freedom for scale. Our results show that making scale dynamic through spherical covariance suffices to achieve essentially equal accuracy to general, free-form deformations. Including further degrees of freedom through diagonal and full covariance does not give further improvement on this task and data. As scale is perhaps the most ubiquitous transformation in the distribution of natural images, scale modeling might suffice to handle many variations.
6 Conclusion
Composing structured Gaussian and free-form filters makes receptive field size and shape differentiable for direct optimization. Through receptive field learning, our semi-structured models do by gradient optimization what current free-form models have done by discrete design. That is, in our parameterization, changes in structured weights accomplish what would otherwise require changes in free-form architecture.
Our method learns local receptive fields. While we have focused on locality in space, the principle is more general, and extends to locality in time and other dimensions.
Factorization of this sort points to a reconciliation of structure and learning, through which known structure is respected and unknown detail is learned freely.
References
 [1] J. Babaud, A. P. Witkin, M. Baudin, and R. O. Duda. Uniqueness of the Gaussian kernel for scale-space filtering. TPAMI, 1986.
 [2] J. Bruna and S. Mallat. Invariant scattering convolution networks. TPAMI, 2013.
 [3] P. Burt and E. Adelson. The Laplacian pyramid as a compact image code. IEEE Transactions on Communications, 31(4):532–540, 1983.
 [4] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected CRFs. In ICLR, 2015.
 [5] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. TPAMI, 2018.
 [6] T. S. Cohen and M. Welling. Steerable CNNs. In ICLR, 2017.

 [7] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The Cityscapes dataset for semantic urban scene understanding. In CVPR, 2016.
 [8] J. Dai, H. Qi, Y. Xiong, Y. Li, G. Zhang, H. Hu, and Y. Wei. Deformable convolutional networks. In ICCV, 2017.
 [9] B. De Brabandere, X. Jia, T. Tuytelaars, and L. Van Gool. Dynamic filter networks. In NIPS, 2016.
 [10] H. Ding, X. Jiang, B. Shuai, A. Q. Liu, and G. Wang. Context contrasted feature and gated multiscale aggregation for scene segmentation. In CVPR, 2018.
 [11] W. T. Freeman and E. H. Adelson. The design and use of steerable filters. TPAMI, 1991.

 [12] K. Fukushima. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36(4):193–202, 1980.
 [13] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
 [14] A. G. Howard. Some improvements on deep convolutional neural network based image classification. arXiv preprint arXiv:1312.5402, 2013.
 [15] J.-H. Jacobsen, J. van Gemert, Z. Lou, and A. W. Smeulders. Structured receptive fields in CNNs. In CVPR, 2016.
 [16] M. Jaderberg, K. Simonyan, A. Zisserman, and K. Kavukcuoglu. Spatial transformer networks. In NIPS, 2015.
 [17] A. Kanazawa, A. Sharma, and D. Jacobs. Locally scale-invariant convolutional neural networks. arXiv preprint arXiv:1412.5104, 2014.
 [18] K. Kandasamy, W. Neiswanger, J. Schneider, B. Poczos, and E. P. Xing. Neural architecture search with bayesian optimisation and optimal transport. In NIPS, 2018.
 [19] J. J. Koenderink. The structure of images. Biological cybernetics, 50(5):363–370, 1984.
 [20] S. W. Kuffler. Discharge patterns and functional organization of mammalian retina. Journal of neurophysiology, 16(1):37–68, 1953.
 [21] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
 [22] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramid networks for object detection. In CVPR, 2017.

 [23] T. Lindeberg. Scale-Space Theory in Computer Vision, volume 256. Springer Science & Business Media, 1994.
 [24] T. Lindeberg. Feature detection with automatic scale selection. International Journal of Computer Vision, 30(2):79–116, 1998.
 [25] H. Liu, K. Simonyan, and Y. Yang. DARTS: Differentiable architecture search. In ICLR, 2019.
 [26] D. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 2004.

 [27] W. Luo, Y. Li, R. Urtasun, and R. Zemel. Understanding the effective receptive field in deep convolutional neural networks. In NIPS, 2016.
 [28] B. A. Olshausen, C. H. Anderson, and D. C. Van Essen. A neurobiological model of visual attention and invariant pattern recognition based on dynamic routing of information. Journal of Neuroscience, 13(11):4700–4719, 1993.
 [29] A. V. Oppenheim and R. W. Schafer. Discrete-Time Signal Processing. Prentice Hall Press, Upper Saddle River, NJ, USA, 3rd edition, 2009.
 [30] P. Perona. Deformable kernels for early vision. TPAMI, 1995.
 [31] J. C. Pinheiro and D. M. Bates. Unconstrained parametrizations for variance-covariance matrices. Statistics and Computing, 6(3):289–296, 1996.
 [32] R. W. Rodieck and J. Stone. Analysis of receptive fields of cat retinal ganglion cells. Journal of neurophysiology, 28(5):833–849, 1965.
 [33] S. Rota Bulò, L. Porzi, and P. Kontschieder. In-place activated BatchNorm for memory-optimized training of DNNs. In CVPR, 2018.
 [34] E. Shelhamer, J. Long, and T. Darrell. Fully convolutional networks for semantic segmentation. TPAMI, 2016.
 [35] L. Sifre and S. Mallat. Rotation, scaling and deformation invariant scattering for texture discrimination. In CVPR, 2013.
 [36] E. P. Simoncelli and W. T. Freeman. The steerable pyramid: A flexible architecture for multi-scale derivative computation. In ICIP, 1995.
 [37] P. Wang, P. Chen, Y. Yuan, D. Liu, Z. Huang, X. Hou, and G. Cottrell. Understanding convolution for semantic segmentation. In WACV, 2018.
 [38] F. Yu and V. Koltun. Multi-scale context aggregation by dilated convolutions. In ICLR, 2016.
 [39] F. Yu, V. Koltun, and T. Funkhouser. Dilated residual networks. In CVPR, 2017.
 [40] F. Yu, D. Wang, E. Shelhamer, and T. Darrell. Deep layer aggregation. In CVPR, 2018.
 [41] R. Zhang. Making convolutional networks shift-invariant again. In ICML, 2019.
 [42] R. Zhang, S. Tang, Y. Zhang, J. Li, and S. Yan. Scale-adaptive convolutions for scene parsing. In ICCV, 2017.
 [43] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia. Pyramid scene parsing network. In CVPR, 2017.

 [44] B. Zoph and Q. V. Le. Neural architecture search with reinforcement learning. In ICLR, 2017.