An Empirical Evaluation of Current Convolutional Architectures' Ability to Manage Nuisance Location and Scale Variability

05/26/2015 · Nikolaos Karianakis, et al.

We conduct an empirical study to test the ability of Convolutional Neural Networks (CNNs) to reduce the effects of nuisance transformations of the input data, such as location, scale and aspect ratio. We isolate factors by adopting a common convolutional architecture either deployed globally on the image to compute class posterior distributions, or restricted locally to compute class conditional distributions given location, scale and aspect ratios of bounding boxes determined by proposal heuristics. In theory, averaging the latter should yield inferior performance compared to proper marginalization. Yet empirical evidence suggests the converse, leading us to conclude that - at the current level of complexity of convolutional architectures and scale of the data sets used to train them - CNNs are not very effective at marginalizing nuisance variability. We also quantify the effects of context on the overall classification task and its impact on the performance of CNNs, and propose improved sampling techniques for heuristic proposal schemes that improve end-to-end performance to state-of-the-art levels. We test our hypothesis on a classification task using the ImageNet Challenge benchmark and on a wide-baseline matching task using the Oxford and Fischer's datasets.




1 Introduction

Convolutional neural networks (CNNs) are the de-facto paragon for detecting the presence of objects in a scene, as portrayed by an image. CNNs are described as being “approximately invariant” to nuisance transformations such as planar translation, both by virtue of their architecture (the same operation is repeated at every location, akin to a “sliding window,” and is followed by local pooling) and by virtue of their approximation properties that, given sufficient parameters and transformed training data, could in principle yield discriminants that are insensitive to nuisance transformations of the data represented in the training set. In addition to planar translation, an object detector must manage variability due to scaling (possibly anisotropic along the coordinate axes, yielding different aspect ratios) and (partial) occlusion. Some nuisances are elements of a transformation group, e.g., the (anisotropic) location-scale group for the case of position, scale and aspect ratio of the object’s support (the region of the image the object projects onto, often approximated by a bounding box).

The fact that convolutional architectures appear effective in classifying images as containing a given object regardless of its position, scale, and aspect ratio [28, 40] suggests that the network can effectively manage such nuisance variability.

However, the quest for top performance in benchmark datasets has led researchers away from letting the CNN manage all nuisance variability. Instead, the image is first pre-processed to yield proposals, which are subsets of the image domain (bounding boxes) to be tested for the presence of a given class (Regions-with-CNN [19]). Proposal mechanisms aim to remove nuisance variability due to position, scale and aspect ratio, leaving a “Category CNN” to classify the resulting bounding box as one of a number of classes it is trained with. Put differently, rather than computing the posterior distribution² with nuisance transformations automatically marginalized, the CNN is used to compute the conditional distribution of classes given the data and a sample element that approximates the nuisance transformation, represented by a bounding box. If the goal is the nuisance itself (object support, as in detection [10]), it can be found via maximum likelihood (max-out) by selecting the bounding box that yields the highest probability of any class [19, 22]. If the goal is the class regardless of the transformation (as in categorization [10]), the nuisance can be approximately marginalized out by averaging the conditional distributions with respect to an estimate of the nuisance transformations.

²One can think of the conditional distribution p(c|I) of a class c given an image I, as defined by a CNN, as the class posterior marginalized with respect to the nuisance group G: p(c|I) = Σ_g p(c|I, g) P(g). If the nuisances were known, one could use the class conditionals p(c|I, g) at each nuisance g to approximate p(c|I) with this weighted average of conditionals. When a CNN is tested on a proposal determined by a reference frame g, it computes p(c | I|_g), i.e., the posterior on I restricted to g, which is an approximation of p(c|I, g). Explicit marginalization (assuming uniform weights) then computes (1/N) Σ_i p(c | I|_{g_i}), which is different from Σ_i p(c|I, g_i) P(g_i), which in turn is different from p(c|I). This approach is therefore, on average, a lower bound on proper marginalization, and the fact that it would outperform the direct computation of p(c|I) is worth investigating empirically.

Now, if a CNN were an effective way of computing the marginals with respect to nuisance variability, there would be no benefit in conditioning and averaging with respect to (inferred) nuisance samples. This is a direct corollary of the Data Processing Inequality (DPI, Theorem 2.8.1 in [9]). Proposals are subsets of the whole image, so in theory less informative even after accounting for resolution/sampling artifacts (Fig. 1). A fortiori, performance should further decrease if the conditioning mechanism is not very representative of the nuisance distribution, as is the case for most proposal schemes that produce bounding boxes based on adaptively downsampling a coarse discretization of the location-scale group [24]. Class posteriors conditioned on such bounding boxes discard the image outside them, further limiting the ability of the network to leverage side information, or “context”. Should the converse be true, i.e., should averaging conditional distributions restricted to proposal regions outperform a CNN operating on the entire image, that would bring into question the ability of a CNN to marginalize nuisances such as translation and scaling, or else go against the DPI. In this paper we test this hypothesis, aiming to answer the question: how effective are current CNNs at reducing the effects of nuisance transformations of the input data, such as location and scaling?
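The two quantities being compared can be made concrete with a small sketch (the helper name, interface, and toy numbers are ours, not the paper's): explicit marginalization averages the class conditionals computed on each sampled nuisance, whereas the baseline computes a single posterior on the whole image.

```python
import numpy as np

def marginalize_posteriors(conditionals, weights=None):
    """Approximate p(c|I) by a weighted average of class conditionals
    p(c | I restricted to g_i), one probability vector per sample g_i."""
    conditionals = np.asarray(conditionals, dtype=float)
    if weights is None:
        weights = np.ones(len(conditionals))
    weights = np.asarray(weights, dtype=float) / np.sum(weights)
    return weights @ conditionals

# Two proposals' class conditionals over three classes:
p_avg = marginalize_posteriors([[0.7, 0.2, 0.1],
                                [0.5, 0.4, 0.1]])
# p_avg is itself a distribution: [0.6, 0.3, 0.1]
```

Because each input vector sums to one, the averaged output is again a valid class distribution, regardless of the (normalized) weights.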

To the best of our knowledge, this has never been done in the literature, despite the keen interest in understanding the properties of CNNs [20, 21, 34, 39, 43, 46, 47] following their empirical success. We are cognizant of the dangers of drawing firm conclusions from empirical evaluations, especially when they involve a myriad of parameters and exploit training sets that can exhibit biases. To this end, in Sect. 2 we describe a testing protocol that uses recognized existing modules and keeps all factors constant while testing each hypothesis.

1.1 Contributions

We first show that a baseline (AlexNet [28]) with a single-model top-5 error of 19.96% on ImageNet 2014 Classification slightly decreases in performance (to 20.41%) when constrained to the ground-truth bounding boxes (Table 1). This may seem surprising at first, as it would appear to violate Theorem 2.6.5 of [9] (on average, conditioning on the true value of the nuisance transformation must reduce uncertainty in the classifier). However, note that the restriction to bounding boxes does not just condition on the location-scale group, but also on visibility, as the image outside the bounding box is ignored. Thus, the slight decrease in performance measures the loss from discarding context by ignoring the image beyond the bounding box.

When we pad the true bounding boxes with a 10-pixel rim, we show that conditioning on such “ground-truth-with-context” regions indeed decreases the error as expected, to 17.66% (Table 1). In Fig. 1 we show the classification performance as a function of the rim size, all the way to the whole image, for AlexNet and VGG16 [40]. A relatively small rim yields the lowest top-5 errors on the ImageNet validation set for both models. This also indicates that the context effectively leveraged by current CNN architectures is limited to a relatively small neighborhood of the object of interest.

The second contribution concerns the proper sampling of the nuisance group. If we interpret the CNN restricted to a bounding box as a function that maps samples of the location-scale group to class-conditional distributions, where the proposal mechanism down-samples the group, then classical sampling theory [38] teaches that we should retain not the value of the function at the samples, but its local average, a process known as anti-aliasing. Also in Table 1, we show that simple uniform averaging of 4 and 8 samples of the isotropic scale group (leaving location and aspect ratio constant) reduces the error to 15.96% and 14.43%, respectively. This is again unintuitive, as one expects that averaging conditional densities would produce a less discriminative classifier, but it is in line with recent developments concerning “domain-size pooling” [12].

| Method | AlexNet (iso.) | AlexNet (aniso.) | VGG16 (iso.) | VGG16 (aniso.) |
|---|---|---|---|---|
| Whole image | 19.96 | | 13.24 | |
| Ground-truth bounding box (GT) | 20.41 | | 12.44 | |
| GT padded with 10 px | 17.66 | 17.65 | 10.91 | 10.30 |
| Ave-GT, 4 domain sizes (padded with [0,30] px) | 15.96 | 16.00 | 9.65 | 8.90 |
| Ave-GT, 8 domain sizes (padded with [0,70] px) | 14.43 | 14.22 | 8.66 | 7.84 |

Table 1: AlexNet’s and VGG16’s top-5 error on the ImageNet 2014 classification challenge when the ground-truth localization is provided, compared to applying the model on the entire image. We pad the ground truth with various rim sizes, both isotropically and anisotropically. We then show how averaging the class posteriors performs when applying the network on concentric domain sizes around the ground truth.

To test the effect of such anti-aliasing on a CNN absent knowledge of the ground-truth object location, we follow the methodology and evaluation protocol of [16] to develop a domain-size pooled CNN (DSP-CNN) and test it on their benchmark classification of wide-baseline correspondence of regions selected by a generic low-level detector (MSER [32]). Our third contribution is to show that this procedure improves the baseline CNN's matching mean AP on standard benchmark datasets (Table 3 and Fig. 5 in Sect. 2.2).

Our fourth contribution goes towards answering the question set forth in the preamble: we consider two popular baselines (AlexNet and VGG16) that perform at the state of the art in the ImageNet Classification challenge and introduce novel sampling and pruning methods, as well as an adaptively weighted marginalization based on the inverse Rényi entropy. Now, if averaging the conditional class posteriors obtained with various sampling schemes improves overall performance, that implies that the implicit “marginalization” performed by the CNN is inferior to that obtained by sampling the group and averaging the resulting class conditionals. This is indeed our observation: for VGG16, we achieve an overall top-5 error of 7.91%, compared to 13.24% when using the whole image (Table 2). There are, however, caveats to this answer, which we discuss in Sect. 3.

Our fifth contribution is a method that performs at the state of the art in the ImageNet Classification challenge when using a single model. In Table 2 we provide various results and their time complexity. We achieve a top-5 classification error of 15.82% and 7.91% for AlexNet and VGG16, compared to 17.55% and 8.85% when they are tested with regularly sampled crops [40], i.e., roughly a 10% relative error reduction for each. Data augmentation techniques such as scale jittering and an ensemble of several models [23, 40, 42] could be deployed along with our method.

The source code implementing our method and the scripts necessary to reproduce the evaluation are available at

1.2 Related work

The literature on CNNs and their role in Computer Vision is rapidly evolving. Attempts to understand the inner workings of CNNs are being conducted [6, 20, 21, 29, 34, 39, 43, 46, 47], along with theoretical analysis [2, 4, 8, 41] aimed at characterizing their representational properties. Such intense interest was sparked by the surprising performance of CNNs [6, 11, 19, 23, 28, 36, 37, 40, 42] in Computer Vision benchmarks [10, 15], where many couple a proposal scheme [1, 5, 7, 14, 24, 25, 27, 31, 35, 44, 48] with a CNN. As our work relates to a vast body of work, we refer the reader to references in the papers that describe the benchmarks we adopt, namely [6], [28] and [40].

Bilen et al. [3] also explore the idea of introducing proposals in classification. However, their approach leverages a significantly larger number of candidates and focuses on sophisticated classifiers and post-normalization of class posteriors. Our investigation targets selecting a very small subset of the most discriminative candidates among generic object proposals, while building on popular CNN models.

2 Experiments

2.1 Large-scale Image Classification

What if we trivialize location and scaling?

First, we test the hypothesis that eliminating the nuisances of location and scaling by providing a bounding box for the object of interest improves classification accuracy. This is not a given, since restricting the network to a bounding box prevents it from leveraging context outside it. We use the AlexNet and VGG16 pretrained models, which are provided with the MatConvNet open-source library [45], and test their top-1 and top-5 classification errors on the ImageNet 2014 classification challenge [10]. The validation set consists of 50,000 images, each annotated a priori by a human with one “salient” class. However, other ImageNet classes appear in many of the images, which can confound any classifier.

We test the classifier in various settings (Table 1); first, by feeding the entire image to it and letting the classifier manage the nuisances. Then we test the ground-truth annotated bounding box and concentric regions that include it. We try both isotropic and anisotropic expansion of the ground-truth region. We observe similar behavior, which is also consistent for both models.

Only for AlexNet in Table 1 does using the object’s ground-truth support perform slightly worse than using the whole image. After we pad the object region with a 10-pixel rim, the top-5 classification error decreases quickly. However, there is a trade-off between context and clutter, and providing too much context has diminishing returns. In Fig. 1 we show how the errors vary as a function of the rim size around the object of interest; performance starts to degrade beyond a certain rim size. The best padding gives a considerably lower top-5 error for both AlexNet and VGG16 than the 19.96% and 13.24%, respectively, obtained when classifying the whole image.
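The rim-padding operation can be sketched as follows (our own helper, assuming an (x1, y1, x2, y2) pixel box convention, which the paper does not specify; the paper's isotropic/anisotropic variants differ in which sides receive the rim):

```python
def pad_box(box, rim, img_w, img_h):
    """Expand a ground-truth box by `rim` pixels on every side,
    clipped to the image bounds."""
    x1, y1, x2, y2 = box
    return (max(x1 - rim, 0), max(y1 - rim, 0),
            min(x2 + rim, img_w), min(y2 + rim, img_h))

# A 10-px rim around a box inside a 100x100 image:
padded = pad_box((30, 40, 60, 70), 10, 100, 100)  # (20, 30, 70, 80)
```

Sweeping `rim` from 0 to the full image size reproduces the experiment behind Fig. 1.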

Figure 1: The top-1 and top-5 classification errors in ImageNet 2014 as a function of the rim size for the AlexNet (above) and VGG16 (below) architectures. A rim size of zero corresponds to the ground-truth bounding box, while the largest rim corresponds to the whole image. A relatively small rim around the ground truth provides the best trade-off between informative context and clutter.

To ensure that this improvement is not due to downsampling, we repeat the experiment with fixed resolution for the whole image and every subregion. We achieve this by shrinking each region with the same downsampling factor that we apply to the whole image before feeding it to the CNN. Finally, we rescale the downsampled region to the CNN input size. These results appear with the label “same resolution” in Fig. 1.

Finally, we apply domain-size average pooling on the class posterior (i.e., the network’s softmax output layer) with 4 and 8 domain sizes that are concentric with the ground truth. The added rim has the declared size either at both dimensions (for the anisotropic case) or only along the minimum dimension (for the isotropic case), and it is uniformly sampled in the range [0, 30] and [0, 70] pixels, respectively. The latter further reduces the top-5 error to 14.43% (isotropically) and 14.22% (anisotropically) for AlexNet, which is lower than any single domain size (c.f. Fig. 1). This suggests that explicitly marginalizing over samples can be beneficial. Next we test whether the improvement stands when using object proposals.

Introducing object proposals.

We deploy a proposal algorithm to generate “object” regions within the image. We use Edge Boxes [48], which provide a good trade-off between recall and speed [24].

First, we decide on the number of proposals that provides a satisfactory cover for the majority of objects present in the dataset. For a single image we search for the highest Intersection over Union (IoU) overlap between the ground-truth region and any proposed sample, and in turn evaluate the network’s performance on the most overlapping sample. We repeat this process for various numbers of proposals on a small subset of the validation set and finally fix the number of proposals at a value that provides a satisfactory trade-off between classification performance and computational cost.
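The cover measurement relies on box overlap; a minimal sketch (our own helper functions, assuming (x1, y1, x2, y2) box coordinates):

```python
def iou(a, b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(ix2 - ix1, 0) * max(iy2 - iy1, 0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def best_proposal(gt, proposals):
    """Index of the proposal with highest IoU against the ground truth."""
    return max(range(len(proposals)), key=lambda i: iou(gt, proposals[i]))
```

The network is then evaluated on `proposals[best_proposal(gt, proposals)]` for each candidate budget of proposals.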

Among the extracted proposals, we choose the most informative subset for our task, based on pruning criteria that we introduce later. Next we discuss what other samples we use, which are also drawn in Fig. 2.

Domain-size pooling and regular crops.

We investigate the influence of domain-size pooling at test time, both as a stand-alone technique and as a source of additional samples for the final method described in Algorithm 1. We deploy domain-size aggregation of the network’s class posterior over sizes that are uniformly sampled in a normalized range relative to the original image size; the number of sizes and the range bounds are set by parameter search. We use both the original and the horizontally flipped regions, which doubles the number of samples.

Finally, we use standard data augmentation techniques from the literature. As customary, the image is isotropically rescaled to a predefined size, and then a predetermined selection of crops is extracted [28, 40, 42].

Figure 2: Visualizing different sampling strategies. Upper left: Object proposals. Generic proposals using Edge Boxes [48]. Upper right: Concentric domain sizes are centered at the center of the image. Below: Regular crops [28, 40, 42].

Pruning samples.

Continuing to sample patches within the image has diminishing returns in terms of discriminability, while including more background patches with noisy class posterior distributions. We adopt an information-theoretic criterion to filter the samples that we use for the subsequent approximate marginalization.

For each proposal we evaluate the network and take the normalized softmax output p = (p_1, …, p_K), where p_k ≥ 0 and K = 1000 classes in ILSVRC classification. The output is a set of non-negative numbers which sum up to 1. We can interpret the vector p as a probability distribution on the discrete space of classes and compute the Rényi entropy H_α(p) = (1 / (1 − α)) log(Σ_k p_k^α).

Our conjecture is that more discriminative class distributions tend to be peakier, with less ambiguity among the classes and therefore lower entropy. In Fig. 3 we show how selecting a subset of image patches whose class posterior has lower entropy improves classification performance.

We extract candidate object proposals [48], introducing a prior that encourages the largest proposals among those that the standard setting in [48] would give: instead of directly extracting a fixed number of proposals, we generate more and keep the largest ones (Algorithm 1). We evaluate the network for both the original candidates and their horizontal flips, then keep a small subset whose posterior distributions have the lowest entropy. We use Rényi entropy with relatively small powers α, as we found that this encourages selecting regions with more than one highly confident candidate object. As the parameter α increases, the entropy is increasingly determined by the events of highest probability; a larger α would be more effective for images with a single object, which is not the case for most images in ILSVRC.
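The entropy-based pruning can be sketched as follows (our own helper names; α = 0.25 is an illustrative small power in the spirit of the text, not necessarily the paper's value):

```python
import numpy as np

def renyi_entropy(p, alpha=0.25):
    """Rényi entropy H_a(p) = log(sum_k p_k^a) / (1 - a), for a != 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # zero-probability classes contribute nothing
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

def lowest_entropy_subset(posteriors, n, alpha=0.25):
    """Indices of the n class posteriors with lowest Rényi entropy."""
    h = [renyi_entropy(p, alpha) for p in posteriors]
    return list(np.argsort(h)[:n])
```

For the uniform distribution over K classes the Rényi entropy equals log K for any α, while a peaked posterior scores lower, so the selection favors confident samples.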

Finally, we introduce a weighted average of the selected posteriors, p(c|I) ≈ Σ_i w_i p(c | I|_{g_i}) / Σ_i w_i, where g_i is the support of sample i and w_i is the weight of its posterior. We try both uniform weights and weights proportional to the inverse Rényi entropy of the posterior, w_i ∝ 1/H_α(p_i). The latter is expected to perform better, as it naturally gives higher weight to the most discriminative samples.

Figure 3: We show the top-5 error as a function of the number of proposals we average to produce the final posterior. Samples are generated with Algorithm 1 and classified with AlexNet. The blue curve corresponds to selecting the samples with the lowest-entropy posteriors. We compare our method with simple strategies such as random selection, ranking by largest size, or highest confidence of proposals. The random sample selection was run multiple times, and we visualize the estimated confidence intervals as error bars. Empirically, the discriminative power of the classifier increases when the samples are selected with the least-entropy criterion.
   1. Object proposals. We extract several object proposals from the image (e.g., with Edge Boxes [48], keeping the largest ones). Among them we choose the proposals whose class posterior has the lowest Rényi entropy with parameter α; the number of proposals kept and α are set by hyper-parameter search.
   2. Concentric domain sizes around the center of the image (including their horizontal flips). We use sizes that are uniformly sampled in a normalized range in which 1 corresponds to the whole image.
   3. Regular crops; e.g., regular grids at one or more scales, as in [28, 40, 42].
   4. The class conditionals are approximated as p(c|I) ≈ Σ_i w_i p(c | I|_{g_i}) / Σ_i w_i, where w_i is either uniform or equal to the inverse entropy of the posterior.
Algorithm 1: Regular adaptive sampling in classification.
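A minimal sketch of Algorithm 1's final aggregation step (our own simplification; the interface, the entropy floor, and the toy posteriors are assumptions, not the paper's implementation):

```python
import numpy as np

def renyi(p, alpha=0.25):
    """Rényi entropy of a discrete distribution (alpha != 1)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

def aggregate(proposal_posts, concentric_posts, crop_posts,
              n_keep, alpha=0.25, weighted=True):
    """Prune proposal posteriors by Rényi entropy, pool them with the
    concentric-size and regular-crop posteriors, then average them
    (uniformly or by inverse entropy) into one class distribution."""
    kept = sorted(proposal_posts, key=lambda p: renyi(p, alpha))[:n_keep]
    pool = np.asarray(list(kept) + list(concentric_posts) + list(crop_posts),
                      dtype=float)
    if weighted:
        # Floor the entropy to avoid dividing by ~0 on extremely
        # peaked posteriors (our own safeguard).
        w = np.array([1.0 / max(renyi(p, alpha), 1e-12) for p in pool])
    else:
        w = np.ones(len(pool))
    w /= w.sum()
    return w @ pool
```

The output is again a probability vector over classes, so top-1/top-5 predictions can be read off directly.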
| # | Method | AlexNet top-1 | top-5 | t (s/im) | VGG16 top-1 | top-5 | t (s/im) | N | n |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Whole image | 43.00 | 19.96 | 0.01 | 33.89 | 13.24 | 0.06 | 1 | 1 |
| 2 | Regular crops | 41.50 | 18.69 | 0.06 | 27.55 | 9.29 | 0.48 | 10 | 10 |
| 3 | Regular crops | 41.01 | 18.05 | 0.66 | 27.44 | 9.12 | 1.34 | 50 | 50 |
| 4 | Regular crops | 40.58 | 17.97 | 0.16 | 27.23 | 8.88 | 1.26 | 30 | 30 |
| 5 | Regular crops | 40.41 | 17.55 | 0.82 | 27.14 | 8.85 | 3.48 | 150 | 150 |
| 6 | Concentric sizes | 40.00 | 17.86 | 0.08 | 28.16 | 9.46 | 0.60 | 10 | 10 |
| 7 | Concentric sizes | 39.38 | 17.08 | 0.22 | 26.94 | 8.83 | 1.08 | 20 | 20 |
| 8 | Concentric sizes | 39.36 | 17.07 | 0.46 | 26.76 | 8.68 | 1.88 | 40 | 40 |
| 9 | Adaptive sampling | 40.18 | 17.53 | 1.26 | 25.60 | 8.24 | 3.02 | 160 | 40 |
| 10 | Adaptive sampling | 38.91 | 16.63 | | 25.28 | 7.91 | | 170 | 30 |
| 11 | Adaptive sampling | 38.05 | 16.19 | 1.34 | 25.19 | 8.11 | 4.38 | 170 | 22 |
| 12 | Adaptive sampling | 37.69 | 15.83 | | 25.11 | 8.01 | | 180 | 32 |
| 13 | Adaptive (fast) | 37.71 | 15.88 | 0.94 | 25.12 | 8.08 | 3.70 | 180 | 32 |
| 14 | Adaptive (W, fast) | 37.57 | 15.82 | 1.28 | 25.11 | 8.02 | 3.80 | 180 | 32 |
| 15 | Adaptive (test set) | 37.417 | 16.018 | | 25.117 | 7.909 | | 180 | 32 |
Table 2: Top-1 and top-5 errors on the ImageNet 2014 classification challenge. Rows 2–5 use the common data augmentation strategies in the literature [28, 40, 42] (i.e., regular sampling). The next three rows use concentric domain sizes that are uniformly sampled in a normalized range relative to the original image size (c.f. Fig. 2). Finally, in the last seven rows, we introduce adaptive sampling, which consists of a data-driven object proposal algorithm [48] and an entropy criterion to select the most discriminative samples on the fly based on the extracted class posterior distribution. The last row shows results on the test set. N stands for the number of samples that are evaluated for each method, while n is the number of samples that are eventually element-wise averaged to produce one single vector of class confidences. Row 5 is the strongest previously reported regular-sampling configuration, against which our methods are compared.


To compare various sampling and inference strategies, we use the AlexNet and VGG16 models. All classification results in Table 2 refer to the validation set of ILSVRC 2014 [10], except for the last row, which demonstrates results on the test set. In rows 2–5 we show the performance of popular multi-crop methods [28, 40, 42]. Then we compare them with strategies that involve concentric domain sizes (rows 6–8) and object proposals (rows 9–15).

Before extracting the crops, and in order to preserve the aspect ratio of each image, we rescale it so that its minimum dimension matches a predefined size. The proposals are extracted at the original image resolution and then rescaled anisotropically to fit the model’s receptive field. Additionally, some multi-crop algorithms resize the image to several scales and then sample patches of fixed size densely over the image. Szegedy et al. [42] use 4 scales and 36 crops per scale, which yields 144 patches in all. Following the methodology of Simonyan et al. [40], it is comparable to deploy 3 scales and extract 50 crops per scale (a 5×5 regular grid with flips), for a total of 150 crops over 3 scales (row 5 in Table 2).

The results, presented in Table 2, indicate as expected that scale jittering at test time improves the classification performance for both 10-crop and 50-crop strategies. Additionally, the 50-crop strategy is better than the 10-crop strategy for both models. The results on row 5 are the lowest errors that can be achieved with these specific single models using only regular crops. (Specifically, we use the VGG16 model that is trained without scale jittering, which appears on the first row of the D area of Table 3 in [40]. Pre-trained models for both AlexNet and VGG16 are publicly available with the MatConvNet toolbox [45]. Simonyan et al., in their evaluation with 150 crops over 3 scales, report a lower top-5 error on ImageNet 2014 validation than the 8.85% our implementation produces; the difference can be attributed to using a different pre-trained model, as the initial weights are sampled from a zero-mean Gaussian distribution, and there might also be minor differences in the training process.)

Then we present our methods and observe that using the AlexNet network with concentric domain sizes outperforms most multi-crop algorithms even though it evaluates and averages far fewer patches. Furthermore, combining it with common crops achieves the best results for both networks, even without using multi-scale jittering. One interpretation of these improvements is that the concentric samples serve as a natural prior for the majority of ILSVRC images, i.e., the object of interest most probably lies near the center rather than at the image boundaries. This is a common assumption in the literature that also appears in large-scale video segmentation [26].

Next, we introduce the adaptive sampling mechanism of Algorithm 1 and reduce the top-5 error to 15.82% and 7.91% for AlexNet and VGG16, respectively. To put this in perspective, Krizhevsky et al. [28] report 16.4% top-5 error when combining 5 models; we improve on this with one single model. The relative improvement for the deployed instances of AlexNet and VGG16, compared to the data-augmentation methods used in [40, 42], is roughly 10% in both cases. Row 14 shows results where the marginalization is weighted based on the entropy (notated as W), while the other adaptive methods use uniform weights (c.f. Algorithm 1). In the last row we show results from the ILSVRC test server for our top-performing method.

Regular and concentric crops assume that objects occupy most of the image or appear near the center. This is a known bias in the ImageNet dataset. To analyze the effect of adaptive sampling, we calculate the intersection-over-union error between the objects and the regular and concentric crops, and show in Fig. 4 the performance of various methods as a function of the IoU error. The improvement of adaptive sampling (via proposals) over regular and concentric crops alone increases as the IoU error grows, i.e., in cases where objects occupy a smaller portion of the image or lie far from the center.

Figure 4: Classification error as a function of the IoU error between the objects and the regular and concentric crops.

Time complexity.

In Table 2 we show the number of evaluated samples (N) and the subset that is actually averaged (n) to extract a single class posterior vector. The sequential time needed for each method is linear in the number of evaluated patches N. We run the experiments with the MatConvNet library and parallelize the load for VGG16 so that testing is done in batches of patches. We report the time profile for each method in Table 2, measured on a machine equipped with an NVIDIA Tesla K80 GPU, 24 Intel Xeon E5 cores and 64 GB of RAM. A few entries cover two rows, as their methods are evaluated together. Extracting the proposals is not a major bottleneck when using an efficient algorithm [24], such as Edge Boxes [48]. In the rows marked “fast” we report results of a faster version, where Edge Boxes forgo edge sharpening and use a single decision tree. Overall, compared to the densest regular-crop strategy, the object proposal scheme introduces marginal computational overhead.

Figure 5: Head-to-head comparison between CNN and DSP-CNN on the Oxford [33] (left) and Fischer's [16] (center) datasets. The layer-4 features of the unsupervised network from [16] are used as descriptors. The DSP-CNN outperforms its CNN counterpart in terms of matching mAP on both datasets. Right: DSP-CNN performs comparably to the state-of-the-art DSP-SIFT descriptor [12].

2.2 Wide-Baseline Correspondence

We test the effect of domain-size pooling in correspondence tasks with a convolutional architecture, as done by [12] for SIFT [30], using the datasets and protocols of [16]. This is illustrated in Fig. 2 (upper right), but here the domain sizes are centered around the detector. We expect that such averaging will increase the discriminability of detected regions and in turn the matching ability, similar to the benefits that we see on the last rows of Table 1.

We use maximally-stable extremal regions (MSER) [32] to detect candidate regions, affine-normalize them, align them to the dominant orientation, and re-scale them for head-to-head comparisons. For the scale detected at each MSER, the DSP-CNN samples domain sizes within a neighborhood around it, computes the CNN responses on these samples, and averages the posteriors. The deployed deep network is the unsupervised convolutional network proposed by [16], which is trained with surrogate labels from an unlabeled dataset (see the methodology in [13]), with the objective of being invariant to several transformations that are commonly observed in images captured from different viewpoints. As opposed to network-classifiers, here the task is correspondence and the network is purely a region descriptor, whose last two layers (3 and 4) are the representations.
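The domain-size pooled descriptor can be sketched as follows (our own simplification; `describe` stands in for evaluating the network's layer-3 or layer-4 response on a region, and the (x1, y1, x2, y2) box convention is an assumption):

```python
import numpy as np

def scale_box(box, s):
    """Scale an (x1, y1, x2, y2) box by factor s about its center."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    hw, hh = (x2 - x1) * s / 2.0, (y2 - y1) * s / 2.0
    return (cx - hw, cy - hh, cx + hw, cy + hh)

def dsp_descriptor(describe, image, box, scales):
    """Average a CNN layer response over several domain sizes around
    one detection; `describe(image, box)` is a stand-in for evaluating
    the network's descriptor layer on a region."""
    return np.mean([describe(image, scale_box(box, s)) for s in scales],
                   axis=0)
```

Matching then proceeds exactly as with the single-scale descriptor, only the pooled vector replaces the layer response at the detected scale.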

| Method | Dim | mAP |
|---|---|---|
| Raw patch | 4,761 | 34.79 |
| SIFT [30] | 128 | 45.32 |
| DSP-SIFT [12] | 128 | 53.72 |
| CNN-L3 [16] | 9,216 | 48.99 |
| CNN-L4 [16] | 8,192 | 50.55 |
| DSP-CNN-L3 | 9,216 | 52.76 |
| DSP-CNN-L4 | 8,192 | 53.07 |
| DSP-CNN-L3-L4 | 17,408 | 53.74 |
| DSP-CNN-L3 (PCA128) | 128 | 51.45 |
| DSP-CNN-L4 (PCA128) | 128 | 52.33 |
| DSP-CNN-L34 (concat. PCA128) | 256 | 52.69 |

Table 3: Matching mean average precision for different approaches on Fischer's dataset [16].

In Fig. 5 (left) we show the comparison between CNN and DSP-CNN on the Oxford dataset [33]. CNN's layer 4 is the representation for each MSER, and DSP-CNN simply averages this layer's responses over all domain sizes, which are uniformly sampled in a neighborhood around the detected scale. DSP-CNN yields a clear improvement in matching mean average precision.

Fischer's dataset [16] includes pairs of images, some with more extreme transformations than those in the Oxford dataset. The types of transformations include zooming, blurring, lighting change, rotation, perspective and nonlinear transformations. In Fig. 5 (center) and Table 3 we show comparisons between CNN and DSP-CNN for the layer-3 and layer-4 representations, demonstrating a relative improvement for both layers. The number of domain sizes and their sampling range are selected with cross-validation. In Table 3 we also compare against baselines such as the raw patch and DSP-SIFT [12]. After a finer parameter search and concatenating layers 3 and 4, we achieve state-of-the-art performance, as shown in Fig. 5 (right), though we note the high dimensionality of this method compared to local descriptors.

Given the inherent high dimensionality of CNN layers, we perform dimensionality reduction with principal component analysis (PCA) to investigate how compression affects matching performance. In Table 3 we report results for the layer-3 and layer-4 representations compressed with PCA to 128 dimensions, as well as their concatenation. There is a modest performance loss, yet the compressed features still outperform the single-scale features by a large margin.
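As a rough sketch of the compression step, the following plain-NumPy PCA projects descriptors onto their top principal components; the matrix sizes below are small stand-ins, not the actual 9,216- and 8,192-dimensional layer outputs of Table 3.

```python
import numpy as np

def pca_compress(X, k):
    """Project row-vector descriptors X (n x d) onto their top-k
    principal components via SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    # Rows of Vt are the principal directions, sorted by singular value.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Toy usage mirroring the DSP-CNN-L34 (concat. PCA128) entry in Table 3:
# compress two "layer" representations to 128-D each and concatenate.
rng = np.random.default_rng(1)
L3 = rng.standard_normal((300, 512))  # stand-in for layer-3 features
L4 = rng.standard_normal((300, 256))  # stand-in for layer-4 features
desc = np.hstack([pca_compress(L3, 128), pca_compress(L4, 128)])
```

In practice the principal directions would be estimated once on a training set and reused at test time; the sketch above fits and projects on the same data for brevity.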

3 Discussion

Our empirical analysis indicates that CNNs, which are designed to be invariant to nuisance variability due to small planar translations (by virtue of their convolutional architecture and local spatial pooling) and trained to manage global translation, distance (scale) and shape (aspect ratio) variability by means of large annotated datasets, are in practice less effective than a naive and, in theory, counter-productive alternative: sampling and averaging the class conditionals over an ad-hoc choice of bounding boxes and their corresponding planar translations, scales and aspect ratios.

This has to be taken with due caveats: first, we have shown the statement empirically for a few choices of network architecture (AlexNet and VGG), trained on particular datasets that are unlikely to be representative of the complexity of visual scenes (although they may be representative of the same scenes as portrayed in the test set), and with specific parameter choices made by their respective authors, both for the classifier and for the evaluation protocol. To test the hypothesis in the fairest possible setting, we have kept all these choices constant while comparing a CNN trained, in theory, to “marginalize” the nuisances thus described against the same network applied to bounding boxes provided by a proposal mechanism. To address the arbitrary choice of proposals, we have employed those used in current state-of-the-art methods, but we have found the results representative of other choices of proposals.

In addition to answering the question posed in the introduction, along the way we have shown that by framing the marginalization of nuisance variables as the averaging of a sub-sampling of marginal distributions we can leverage concepts from classical sampling theory to anti-alias the overall classifier, which leads to performance improvements both in categorization, as measured on the ImageNet benchmark, and in correspondence, as measured on the Oxford and Fischer’s matching benchmarks.
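A minimal sketch of this averaging, framed as a weighted combination of per-proposal class posteriors: the function and its arguments below are illustrative, not the exact scheme used in the experiments.

```python
import numpy as np

def average_posteriors(class_posteriors, weights=None):
    """Combine per-proposal class posteriors p(c | box_i, image) into a
    single image-level distribution, approximating marginalization over
    box location/scale by sampling and averaging.

    class_posteriors: (n_boxes, n_classes) array, each row sums to 1.
    weights:          optional per-box weights (uniform averaging if None).
    """
    P = np.asarray(class_posteriors, dtype=float)
    if weights is None:
        weights = np.full(P.shape[0], 1.0 / P.shape[0])
    avg = weights @ P
    return avg / avg.sum()  # renormalize against numerical drift

# Two proposals that disagree on a 3-class problem.
P = np.array([[0.7, 0.2, 0.1],
              [0.5, 0.4, 0.1]])
image_posterior = average_posteriors(P)
```

Non-uniform weights could, for instance, come from a proposal objectness score; uniform weights correspond to the plain averaging discussed here.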

Of course, like any universal approximator, a CNN can in principle capture the geometry of the discriminant surface by “learning away” nuisance variability, given sufficient resources in terms of layers, number of filters, and number of training samples. So in the abstract sense a CNN can indeed marginalize out nuisance variability. The analysis conducted here shows that, at the level of complexity imposed by current architectures and training sets, it does so less effectively than ad-hoc averaging over proposal distributions.

This leaves researchers a choice: invest more effort in the design of proposal mechanisms [18, 36], subtracting duties from the category CNN downstream, or invest more effort in scaling up the size and efficiency of learning algorithms for general CNNs so as to render the need for a proposal scheme moot.


This research is supported by ARO W911NF-15-1-0564/66731-CS, ONR N00014-13-1-034, and AFOSR FA9550-15-1-0229. We gratefully acknowledge NVIDIA Corporation for donating a K40 GPU that was used in support of some of the experiments.



  • [1] B. Alexe, T. Deselaers, and V. Ferrari. Measuring the objectness of image windows. In IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012.
  • [2] F. Anselmi, L. Rosasco, and T. Poggio. On Invariance and Selectivity in Representation Learning. arXiv preprint arXiv:1503.05938, 2015.
  • [3] H. Bilen, M. Pedersoli, and T. Tuytelaars. Weakly supervised object detection with posterior regularization. In British Machine Vision Conference, 2014.
  • [4] J. Bruna and S. Mallat. Invariant scattering convolution networks. In IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013.
  • [5] J. Carreira and C. Sminchisescu. CPMC: Automatic object segmentation using constrained parametric min-cuts. In IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012.
  • [6] K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman. Return of the devil in the details: Delving deep into convolutional nets. In British Machine Vision Conference, 2014.
  • [7] M. Cheng, Z. Zhang, W. Lin, and P. Torr. BING: Binarized normed gradients for objectness estimation at 300fps. In IEEE Conference on Computer Vision and Pattern Recognition, 2014.
  • [8] T. Cohen and M. Welling. Learning the irreducible representations of commutative Lie groups. In International Conference on Machine Learning, 2014.
  • [9] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley & Sons, 2012.
  • [10] J. Deng, W. Dong, R. Socher, L. J. Li, K. Li, and F.-F. Li. ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition, 2009.
  • [11] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. In IEEE Conference on Computer Vision and Pattern Recognition, 2015.
  • [12] J. Dong and S. Soatto. Domain-size pooling in local descriptors: DSP-SIFT. In IEEE Conference on Computer Vision and Pattern Recognition, 2015.
  • [13] A. Dosovitskiy, J. Springenberg, M. Riedmiller, and T. Brox. Unsupervised feature learning by augmenting single images. In Advances in Neural Information Processing Systems, 2014.
  • [14] D. Erhan, C. Szegedy, A. Toshev, and D. Anguelov. Scalable object detection using deep neural networks. In IEEE Conference on Computer Vision and Pattern Recognition, 2014.
  • [15] M. Everingham, L. V. Gool, C. Williams, J. Winn, and A. Zisserman. The Pascal Visual Object Classes (VOC) challenge. In International Journal of Computer Vision, 2010.
  • [16] P. Fischer, A. Dosovitskiy, and T. Brox. Descriptor matching with convolutional neural networks: a comparison to SIFT. arXiv preprint arXiv:1405.5769, 2014.
  • [17] R. Gens, and P. Domingos. Deep Symmetry Networks. In Advances in Neural Information Processing Systems, 2014.
  • [18] R. Girshick. Fast R-CNN. In IEEE International Conference on Computer Vision, 2015.
  • [19] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition, 2014.
  • [20] I. Goodfellow, H. Lee, Q. V. Le, A. Saxe, and A. Y. Ng. Measuring invariances in deep networks. In Advances in Neural Information Processing Systems, 2009.
  • [21] I. Goodfellow, J. Shlens, and C. Szegedy. Explaining and Harnessing Adversarial Examples. In International Conference on Learning Representations, 2015.
  • [22] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In IEEE European Conference on Computer Vision, 2014.
  • [23] K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In IEEE International Conference on Computer Vision, 2015.
  • [24] J. Hosang, R. Benenson, P. Dollár, and B. Schiele. What makes for effective detection proposals? In IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015.
  • [25] A. Humayun, F. Li, and J. M. Rehg. RIGOR: Reusing inference in graph cuts for generating object regions. In IEEE Conference on Computer Vision and Pattern Recognition, 2014.
  • [26] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition, 2014.
  • [27] P. Krähenbühl and V. Koltun. Geodesic object proposals. In IEEE European Conference on Computer Vision, 2014.
  • [28] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.
  • [29] C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply-supervised nets. In International Conference on Artificial Intelligence and Statistics, 2015.
  • [30] D. G. Lowe. Distinctive image features from scale-invariant keypoints. In International Journal of Computer Vision, 2004.
  • [31] S. Manen, M. Guillaumin, and L. V. Gool. Prime object proposals with randomized Prim’s algorithm. In IEEE International Conference on Computer Vision, 2013.
  • [32] J. Matas, O. Chum, M. Urban, and T. Pajdla. Robust wide-baseline stereo from maximally stable extremal regions. In Image and Vision Computing, 2004.
  • [33] K. Mikolajczyk, T. Tuytelaars, C. Schmid, A. Zisserman, J. Matas, F. Schaffalitzky, T. Kadir, and L. Van Gool. A comparison of affine region detectors. In International Journal of Computer Vision, 2005.
  • [34] A. Nguyen, J. Yosinski, and J. Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In IEEE Conference on Computer Vision and Pattern Recognition, 2015.
  • [35] E. Rahtu, J. Kannala, and M. Blaschko. Learning a category independent object detection cascade. In IEEE International Conference on Computer Vision, 2011.
  • [36] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, 2015.
  • [37] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. In International Conference on Learning Representations, 2014.
  • [38] C. E. Shannon. A mathematical theory of communication. In ACM SIGMOBILE Mobile Computing and Communications Review, 2001.
  • [39] K. Simonyan, A. Vedaldi, and A. Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. In International Conference on Learning Representations, 2014.
  • [40] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2015.
  • [41] S. Soatto and A. Chiuso. Visual Representations: Defining properties and deep approximation. In International Conference on Learning Representations, 2016.
  • [42] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In IEEE Conference on Computer Vision and Pattern Recognition, 2015.
  • [43] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. In International Conference on Learning Representations, 2014.
  • [44] J. Uijlings, K. van de Sande, T. Gevers, and A. Smeulders. Selective search for object recognition. International Journal of Computer Vision, 2013.
  • [45] A. Vedaldi and K. Lenc. MatConvNet: Convolutional neural networks for MATLAB. In ACM Conference on Multimedia Conference, 2015.
  • [46] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, 2014.
  • [47] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In IEEE European Conference on Computer Vision, 2014.
  • [48] C. L. Zitnick and P. Dollár. Edge boxes: Locating object proposals from edges. In IEEE European Conference on Computer Vision, 2014.