Domain-Size Pooling in Local Descriptors: DSP-SIFT

12/30/2014 ∙ by Jingming Dong, et al.

We introduce a simple modification of local image descriptors, such as SIFT, based on pooling gradient orientations across different domain sizes, in addition to spatial locations. The resulting descriptor, which we call DSP-SIFT, outperforms other methods in wide-baseline matching benchmarks, including those based on convolutional neural networks, despite having the same dimension as SIFT and requiring no training.


1 Introduction

Local image descriptors, such as SIFT [27] and its variants, are designed to reduce variability due to illumination and vantage point while retaining discriminative power. This facilitates finding correspondence between different views of the same underlying scene. In a wide-baseline matching task on the Oxford benchmark [30, 31], nearest-neighbor SIFT descriptors achieve a mean average precision (mAP) of 27.50%, a roughly 72% improvement over direct comparison of normalized grayscale values (cf. Table 1). Other datasets yield similar results [32]. Functions that reduce sensitivity to nuisance variability can also be learned from data [29, 33, 43, 45, 48]. Convolutional neural networks (CNNs) can be trained to “learn away” nuisance variability while retaining class labels using large annotated datasets. In particular, [16] uses (patches of) natural images as surrogate classes and adds transformed versions to train the network to discount nuisance variability. The activation maps in response to image values can be interpreted as a descriptor and used for correspondence. [13, 16] show that the CNN outperforms SIFT, albeit with a much larger dimension. Here we show that a simple modification of SIFT, obtained by pooling gradient orientations across different domain sizes (“scales”), in addition to spatial locations, improves it by a considerable margin, also outperforming the best CNN. We call the resulting descriptor “domain-size pooled” SIFT, or DSP-SIFT.

Figure 1: In SIFT (top, recreated according to [27]) isolated scales are selected and the descriptor constructed from the image at the selected scale by computing gradient orientations and pooling them in spatial neighborhoods, yielding histograms that are concatenated and normalized to form the descriptor. In DSP-SIFT (bottom), pooling occurs across different domain sizes: patches of different sizes are re-scaled, gradient orientations computed and pooled across locations and scales, and concatenated, yielding a descriptor of the same dimension as ordinary SIFT.

Pooling across different domain sizes is implemented in a few lines of code, can be applied to any histogram-based method (Sect. 3), and yields a descriptor of the same size that outperforms the original essentially uniformly (Fig. 4). Yet combining histograms of images of different sizes is counterintuitive and seemingly at odds with the teachings of scale-space theory and the resulting established practice of scale selection [26] (Sect. 1.1). It is, however, rooted in classical sampling theory and anti-aliasing. Sect. 2 describes what we do, Sect. 3 how we do it, and Sect. 5 why we do it. Sect. 4 validates our method empirically.

1.1 Related work

A single, un-normalized cell of the “scale-invariant feature transform” SIFT [27] and its variants [2, 8, 10] can be written compactly as a formula [12, 46]:

h_SIFT(θ, x̂)[I] = ∫_{λ(x̂, ŝ)} κ_ε(θ − ∠∇I(y)) κ_σ(y − x̂) ‖∇I(y)‖ dy   (1)

where I is the image restricted to a square domain λ(x̂, ŝ), centered at a location x̂ with size ŝ in the lattice determined by the response to a difference-of-Gaussian (DoG) operator across all locations and scales (SIFT detector). Here θ, the independent variable, ranges from 0 to 2π and corresponds to an orientation histogram bin of size ε, and σ is the spatial pooling scale. The kernel κ_σ is bilinear of size σ and κ_ε is separable-bilinear of size ε [46], although they could be replaced by a Gaussian with standard deviation σ and an angular Gaussian with dispersion parameter ε. The SIFT descriptor is the concatenation of cells (1) computed at locations x̂ on a spatial lattice, and normalized.
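As a concrete illustration of (1), the sketch below computes one un-normalized cell with bilinear spatial and angular kernels. The function name and parameter values are illustrative assumptions, not the VLFeat implementation:

```python
import numpy as np

def sift_cell(patch, center, sigma=4.0, n_bins=8):
    """One un-normalized SIFT cell in the spirit of Eq. (1), a minimal sketch.

    Gradient orientations are soft-binned with an angular bilinear kernel of
    width eps = 2*pi/n_bins, spatially weighted by a separable bilinear kernel
    of size sigma around `center`, each vote weighted by gradient magnitude.
    """
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)

    eps = 2 * np.pi / n_bins
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    # separable bilinear spatial kernel kappa_sigma(y - x_hat)
    wy = np.clip(1 - np.abs(ys - center[0]) / sigma, 0, None)
    wx = np.clip(1 - np.abs(xs - center[1]) / sigma, 0, None)
    spatial = wy * wx

    hist = np.zeros(n_bins)
    for b in range(n_bins):
        # wrapped angular bilinear kernel kappa_eps(theta - angle(grad))
        d = np.abs(np.mod(ang - b * eps + np.pi, 2 * np.pi) - np.pi)
        ang_w = np.clip(1 - d / eps, 0, None)
        hist[b] = np.sum(ang_w * spatial * mag)
    return hist
```

Concatenating such cells over the spatial lattice and normalizing yields the full descriptor.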

The spatial pooling scale σ and the size ŝ of the image domain where the SIFT descriptor is computed are tied to the photometric characteristics of the image, since ŝ is derived from the response of a DoG operator on the (single) image.¹ Such a response depends on the reflectance properties of the scene and the optical characteristics and resolution of the sensor, neither of which is related to the size and shape of co-visible (corresponding) regions. Instead, how large a portion of a scene is visible in each corresponding image depends on the shape of the scene, the pose of the two cameras, and the resulting visibility (occlusion) relations. Therefore, we propose to untie the size of the domain where the descriptor is computed (“scale”) from the photometric characteristics of the image, departing from the teachings of scale selection (Fig. 8). Instead, we use basic principles of classical sampling theory and anti-aliasing to achieve robustness to domain-size changes due to occlusions (Sect. 5).

¹Approaches based on “dense SIFT” forgo the detector and instead compute descriptors on a regular sampling of locations and scales (Fig. 9). However, no existing dense SIFT method performs domain-size pooling.

Pooling is commonly understood as the combination of responses of feature detectors/descriptors at nearby locations, aimed at transforming the joint feature representation into a more usable one that preserves important information (intrinsic variability) while discarding irrelevant detail (nuisance variability) [4, 22]. However, precisely how pooling trades off these two conflicting aims is unclear and mostly addressed empirically in end-to-end comparisons with numerous confounding factors. Exceptions include [4], where intrinsic and nuisance variability are combined and abstracted into the variance and distance between the means of scalar random variables in a binary classification task. For more general settings, the goal of reducing nuisance variability while preserving intrinsic variability is elusive, as a single image does not afford the ability to separate the two [12].

An alternate interpretation of pooling as anti-aliasing [40] clearly highlights its effects on intrinsic and nuisance variability: Because one cannot know what portion of an object or scene will be visible in a test image, a scale-space (“semi-orbit”) of domain sizes (“receptive fields”) should be marginalized or searched over (“max-out”). Neither can be computed in closed-form, so the semi-orbit has to be sampled. To reduce complexity, only a small number of samples should be retained, resulting in undersampling and aliasing phenomena that can be mitigated by anti-aliasing, with quantifiable effects on the sensitivity to nuisance variability. For the case of histogram-based descriptors, anti-aliasing planar translations consists of spatial pooling, routinely performed by most descriptors. Anti-aliasing visibility results in domain-size aggregation, which no current descriptor practices. This interpretation also offers a way to quantify the effects of pooling on discriminative (reconstruction) power directly, using classical results from sampling theory, rather than indirectly through an end-to-end classification experiment that may contain other confounding factors.

Domain-size pooling can be applied to a number of different descriptors or convolutional architectures. We illustrate its effects on the most popular, SIFT. However, we point out that proper marginalization requires the availability of multiple images of the same scene, and therefore cannot be performed in a single image. While most local image descriptors are computed from a single image, exceptions include [12, 25]. Of course, multiple images can be “hallucinated” from one, but the resulting pooling operation can only achieve invariance to modeled transformations.

In neural network architectures, there is evidence that abstracting spatial pooling hierarchically, i.e., aggregating nearby responses in feature maps, is beneficial [4]. This process could be extended by aggregating across different neighborhood sizes in feature space. To the best of our knowledge, the only architecture that performs some kind of pooling across scales is [34], although the justification provided in [5] only concerns translation within each scale. The same goes for [6], where pooling (low-pass filtering) is only performed within each scale, and not across scales. Other works learn the regions for spatial pooling, for instance [22, 37], but still restrict pooling to within-scale, similar to [23], rather than across scales as we advocate.

We distinguish multi-scale methods, which concatenate descriptors computed independently at each scale, from cross-scale pooling, where statistics of the image at different scales are combined directly in the descriptor. Examples of the former include [21], where ordinary SIFT descriptors computed on domains of different size are assumed to belong to a linear subspace, and [37], where Fisher vectors are computed for multiple sizes and aspect ratios and spatial pooling occurs within each level. Bag-of-words (BoW) methods [38], as mid-level representations, also aggregate different low-level descriptors by counting their frequency after discretization. Typically, vector quantization or another clustering technique is used, each descriptor is associated with a cluster center (“word”), and the frequency of each word is recorded in lieu of the descriptors themselves. This can be done for domain size, by computing different descriptors at the same location, for different domain sizes, and then counting frequencies relative to a dictionary learned from a large training dataset (Sect. 4.4).
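The word-counting scheme just described can be sketched as follows; the codebook and descriptors are stand-ins for a dictionary learned offline, and all names are illustrative:

```python
import numpy as np

def bow_over_sizes(descriptors_per_size, codebook):
    """Bag-of-words across domain sizes, a minimal sketch.

    Each row of `descriptors_per_size` is a descriptor computed at the same
    location but a different domain size. Each is assigned to its nearest
    codebook center ("word"), and the word-frequency histogram replaces the
    descriptors themselves.
    """
    # squared Euclidean distance of every descriptor to every word
    d2 = ((descriptors_per_size[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                 # hard vector quantization
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()                  # normalized word frequencies
```

This is the kind of representation DSP-SIFT is compared against in Sect. 4.4, where pooling is done by quantized encoding rather than by combining the orientation histograms directly.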

Aggregation across time, which may include changes of domain size, is advocated in [20], but in the absence of formulas it is unclear how this approach relates to our work. In [14], weights are shared across scales, which is not equivalent to pooling, but still establishes some dependencies across scales. MTD [24] appears to be the first instance of pooling across scales, although the aggregation is global in scale-space, with a consequent loss of discriminative power. Most recently, [18] advocates the same, but in practice space-pooled VLAD descriptors obtained at different scales are simply concatenated. Also [3] can be thought of as a form of pooling, but the resulting descriptor only captures the mean of the resulting distribution. In addition, [44] exploits the possibility of estimating the proper scales for nearby features via scale propagation, but still no pooling is performed across scales. Additional details on related prior work are discussed in Appendix A.

2 Domain-Size Pooling

If SIFT is written as (1), then DSP-SIFT is given by

h_DSP(θ, x)[I] = ∫ h_SIFT(θ, x)[I|_{λ(x, s)}] f(s; τ) ds   (2)

where τ is the size-pooling scale and f is an exponential or other unilateral density function. The process is visualized in Fig. 1. Unlike SIFT, which is computed on a scale-selected lattice, DSP-SIFT is computed on a regularly sampled lattice. Computed on a different lattice, the above can be considered a recipe for DSP-HOG [10]. Computed on a tree, it can be used to extend deformable-parts models (DPM) [15] to DSP-DPM. Replacing SIFT with another histogram-based descriptor “X” (for instance, SURF [2]), the above yields DSP-X. Applied to a hidden layer of a convolutional network, it yields a DSP-CNN, or a DSP-Deep-Fisher-Network [36]. The details of the implementation are in Sect. 3.
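A minimal sketch of (2) with a uniform density f over a few size samples (the choice later adopted in Sect. 3). The hard binning, nearest-neighbor resizing, sampling grid and parameter values are our illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def orientation_hist(patch, n_bins=8):
    # hard-binned gradient-orientation histogram weighted by gradient magnitude
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    bins = np.minimum((ang / (2 * np.pi) * n_bins).astype(int), n_bins - 1)
    return np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)

def resize_nn(patch, out):
    # nearest-neighbor resize to out x out (stand-in for proper rescaling)
    r = (np.arange(out) * patch.shape[0] / out).astype(int)
    c = (np.arange(out) * patch.shape[1] / out).astype(int)
    return patch[np.ix_(r, c)]

def dsp_histogram(image, center, size, scales=(0.75, 1.0, 1.25), n_bins=8, out=32):
    """Domain-size pooled orientation histogram in the spirit of Eq. (2).

    Crops domains of several sizes s around `center` (uniform f: equal weight
    per sample), rescales each crop to a common size, accumulates orientation
    histograms, and normalizes. Assumes the domains stay inside the image.
    """
    cy, cx = center
    h = np.zeros(n_bins)
    for s in scales:
        half = max(int(round(size * s / 2)), 1)
        crop = image[cy - half:cy + half, cx - half:cx + half]
        h += orientation_hist(resize_nn(crop, out), n_bins)
    n = np.linalg.norm(h)
    return h / n if n > 0 else h
```

Note that the pooled histogram has the same dimension as a single-size histogram: aggregation happens inside the bins, not by concatenation.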

While the implementation of DSP is straightforward, its justification is less so. We report a summary in Sect. 5 and the detailed derivation in Appendix B, which provides a theoretical justification and conditions under which the resulting descriptors are valid. In Sect. 4 we compare DSP-SIFT to alternate approaches. Motivated by the experiments of [31, 32] that compare local descriptors, we choose SIFT as a paragon and compare it to DSP-SIFT on the standard benchmark [31]. Motivated by [16], which compares SIFT to both supervised and unsupervised CNNs, trained on ImageNet and Flickr respectively, on the same benchmark [31], we submit DSP-SIFT to the same protocol. We also run the test on the new synthetic dataset introduced by [16], which yields the same qualitative assessment.

Clearly, domain-size pooling of under-sampled semi-orbits cannot outperform fine sampling, so if we were to retain all the scale samples instead of aggregating them, performance would further improve. However, computing and matching a large collection of SIFT descriptors across different scales would incur significantly increased computational and storage costs. To contain the latter, [21] assumes that descriptors at different scales populate a linear subspace and fits a high-dimensional hyperplane. The resulting scale-less SIFT (SLS) outperforms ordinary SIFT, as shown in Fig. 7. However, the linear subspace assumption breaks down when considering large scale changes, so SLS is outperformed by DSP-SIFT despite the considerable difference in (memory and time) complexity.

3 Implementation and Parameters

Following other evaluation protocols, we use Maximally Stable Extremal Regions (MSER) [28] to detect candidate regions, affine-normalize, re-scale and align them to the dominant orientation. For a detected scale ŝ, DSP-SIFT samples scales within a neighborhood around it. For each scale-sampled patch, a single-scale un-normalized SIFT descriptor (1) is computed on the SIFT scale-space octave corresponding to the sampled scale.² By choosing f to be a uniform density, these raw histograms of gradient orientations at different scales are accumulated and normalized³ (2). Fig. 2 shows the mean average precision (defined in Sect. 4.2) for different domain-size pooling ranges. Improvements are observed as soon as more than one scale is used, with diminishing returns: performance decreases once the domain-size pooling radius grows too large. Fig. 2 also shows the effect of the number of size samples used to construct DSP-SIFT. Although the more samples the merrier, three size samples are sufficient to outperform ordinary SIFT, and additional samples do not further increase the mean average precision but incur more computational cost. In the evaluation in Sect. 4, we use the best-performing pooling radius and number of samples, selected empirically on the Oxford dataset [30, 31].

²This is an updated version of the protocol described in [16], as discussed in detail in Appendix D.
³We follow the practice of SIFT [27] to normalize, clamp and re-normalize the histograms, with the clamping threshold set to 0.067 empirically.
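The normalize–clamp–renormalize step from the footnote can be sketched as follows (the 0.067 threshold is the paper's empirical setting; the function name is ours):

```python
import numpy as np

def normalize_clamp_renormalize(hist, clamp=0.067):
    """SIFT-style post-processing of the pooled histogram, a minimal sketch:
    unit-normalize, clamp entries at the threshold, and re-normalize."""
    h = np.asarray(hist, dtype=float)
    n = np.linalg.norm(h)
    if n > 0:
        h = h / n
    h = np.minimum(h, clamp)   # limit the influence of any single bin
    n = np.linalg.norm(h)
    return h / n if n > 0 else h
```

Clamping limits the influence of large gradient magnitudes (e.g., from illumination edges) on the final descriptor.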

Figure 2: Mean average precision for different parameters. The left panel shows how mAP changes with the radius of domain-size pooling, with the best mAP achieved at an intermediate radius; the right panel shows mAP as a function of the number of samples used within the best range.

4 Validation

As a baseline, the RAW-PATCH descriptor (named following [16]) is the unit-norm grayscale intensity of the affine-rectified patch, resized to a fixed size (91×91).

The standard SIFT, widely accepted as a paragon [30, 32], is computed using the VLFeat library [46]. Both SIFT and DSP-SIFT are computed on the SIFT scale-space corresponding to the detected scales. Instead of mapping all patches to an arbitrary user-defined size, we use the area of each selected and rectified MSER region to determine the octave level in the scale-space where SIFT (as well as DSP-SIFT) is to be computed.

Scale-less SIFT (SLS) is computed using the source code provided by the authors [21]: for each selected and rectified patch, standard SIFT descriptors are computed at multiple scales spanning a fixed scale range, and the standard PCA subspace dimension is used, yielding a final descriptor of dimension 8256 after a subspace-to-vector mapping.
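For intuition, a subspace-to-vector mapping in the spirit of SLS [21] might look as follows. The SVD-based fit and the subspace dimension k = 8 are our assumptions, not the authors' code, but the output dimension 128·129/2 = 8256 matches the SLS entry in Table 1:

```python
import numpy as np

def sls_descriptor(multiscale_sifts, k=8):
    """Hedged sketch of a subspace-to-vector mapping a la SLS.

    Fits a k-dimensional linear subspace (via SVD) to the SIFT descriptors of
    one patch computed at many scales, then maps the subspace's projection
    matrix Q = U U^T to the vector of its upper-triangular entries.
    """
    X = multiscale_sifts - multiscale_sifts.mean(axis=0)   # rows: scales
    U = np.linalg.svd(X.T, full_matrices=False)[0][:, :k]  # basis, 128 x k
    Q = U @ U.T                                            # projection matrix
    iu = np.triu_indices(Q.shape[0])
    return Q[iu]                                           # 128*129/2 = 8256 entries
```

Because the subspace is represented by its projection matrix, descriptors at different scales can be compared without choosing a single scale, at the cost of a much larger dimension.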

To compare DSP-SIFT to a convolutional neural network, we use the top performer in [16], an unsupervised model pre-trained on patches of natural images, each undergoing a set of random transformations. The responses at the intermediate layers (CNN-L3) and (CNN-L4) are used for comparison, following [16]. Since the network requires input patches of fixed size, we tested and report the results on both 69×69 (PS69) and 91×91 (PS91) patches, as in [16].

Although no direct comparison with multiscale template descriptors (MTD) [24] is performed, SLS can be considered to dominate MTD, since SLS uses all scales without collapsing them into a single histogram. The derivation in Sect. 5 suggests, and the empirical evidence in Fig. 2 confirms, that aggregating the histogram across all scales significantly reduces discriminative power. Sect. 4.4 compares DSP-SIFT to a BoW that pools SIFT descriptors computed at different sizes at the same location.

4.1 Datasets

The Oxford dataset [30, 31] comprises pairs of images of mostly planar scenes seen under different pose, distance, blurring, compression and lighting, organized into categories undergoing transformations of increasing magnitude. While routinely used to evaluate descriptors, this dataset has limitations in terms of size and its restriction to mostly planar scenes, modest scale changes, and absence of occlusions. Fischer et al. [16] recently introduced a dataset of image pairs with more extreme transformations, including zooming, blurring, lighting change, rotation, perspective and nonlinear transformations.

4.2 Metrics

Following [30], we use precision-recall (PR) curves to evaluate descriptors. A match between two descriptors is declared if their Euclidean distance is less than a threshold. It is then labeled a true positive if the area of intersection over union (IoU) of their corresponding MSER-detected regions is sufficiently large. Both datasets provide ground-truth mappings between images, so the overlap is computed by warping the first MSER region into the second image and then computing its overlap with the second MSER region. Recall is the fraction of true positives over the total number of correspondences. Precision is the percentage of true matches within the total number of matches. By varying the distance threshold, a PR curve is generated and the average precision (AP, a.k.a. area under the curve, AUC) is estimated. The average of APs over image pairs provides the mean average precision (mAP) scores used for comparison.
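The protocol above can be sketched as follows; the helper is hypothetical and assumes candidate matches are already labeled by the IoU test, with the number of ground-truth correspondences taken as the number of correct matches in the list (a simplification):

```python
import numpy as np

def average_precision(distances, is_correct):
    """Descriptor-matching AP, a minimal sketch of Sect. 4.2.

    `distances`: Euclidean distances of candidate matches; `is_correct`:
    whether each match passes the IoU test against the ground-truth warp.
    Sweeping the distance threshold traces a PR curve whose area is the AP;
    averaging AP over image pairs gives the mAP used for comparison.
    """
    order = np.argsort(distances)              # increasing threshold sweep
    correct = np.asarray(is_correct, dtype=float)[order]
    tp = np.cumsum(correct)
    precision = tp / np.arange(1, len(correct) + 1)
    recall = tp / max(correct.sum(), 1.0)
    # step-wise area under the PR curve: precision weighted by recall increments
    return float(np.sum(np.diff(np.r_[0.0, recall]) * precision))
```

Here a perfect ranking (all correct matches at the smallest distances) yields AP = 1.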

Figure 3: Average precision for different magnitudes of transformations. The left panels show AP for increasing magnitude of the transformations in the Oxford dataset [30]. The mean AP over all pairs with the corresponding amount of transformation is shown in the middle of the third row. The right panels show the same for Fischer’s dataset [16].
Figure 4: Head-to-head comparisons. Similarly to [16], each point represents one pair of images in the Oxford (top) and Fischer (bottom) datasets. The coordinates indicate average precision for each of the two methods under comparison. SIFT is superior to RAW-PATCH, but is outperformed by DSP-SIFT and CNN-L4. The right two columns show that DSP-SIFT is better than SLS and CNN-L4 despite the difference in dimensions (shown in the axes). The relative performance improvement of the winner is shown in the title of each panel.

4.3 Comparison

Fig. 3 shows the behavior of each descriptor for varying degrees of severity of each transformation. DSP-SIFT consistently outperforms other methods when there are large scale changes (zoom). It is also more robust to other transformations such as blur, lighting and compression in the Oxford dataset [31], and to nonlinear, perspective, lighting, blur and rotation transformations in Fischer’s [16]. DSP-SIFT is not at the top of the list of all compared descriptors in viewpoint-change cases, although “viewpoint” is a misnomer, as MSER-based rectification accounts for most of the viewpoint variability, and the residual variability is mostly due to interpolation and rectification artifacts. The fact that DSP-SIFT outperforms the CNN in nearly all cases in Fischer’s dataset is surprising, considering that the neural network is trained by augmenting the dataset using similar types of transformations.

Fig. 4 shows head-to-head comparisons between these methods, in the same format as [16]. DSP-SIFT outperforms SIFT by roughly 43% and 19% on Oxford and Fischer respectively (cf. Table 1). Only on two pairs of images in Fischer’s dataset does domain-size pooling negatively affect the performance of SIFT, and the decrease is rather small; DSP-SIFT improves on SIFT on every pair of images in the Oxford dataset. The improvement of DSP-SIFT comes without any increase in dimension. In comparison, CNN-L4-PS91 achieves roughly 12% improvements over SIFT on both datasets by increasing dimension 64-fold (Table 1). On both datasets, DSP-SIFT also consistently outperforms CNN-L4 and SLS despite its lower dimension.
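These relative improvements can be recomputed directly from the Oxford column of Table 1:

```python
# Relative mAP improvements implied by the Oxford column of Table 1.
sift, dsp_sift, cnn_l4_ps91 = 0.2750, 0.3936, 0.3068

print(f"DSP-SIFT over SIFT: {100 * (dsp_sift / sift - 1):.1f}%")      # prints 43.1%
print(f"CNN-L4-PS91 over SIFT: {100 * (cnn_l4_ps91 / sift - 1):.1f}%") # prints 11.6%
```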

4.4 Comparison with Bag-of-Words

To compare DSP-SIFT to BoW, we computed SIFT at multiple scales on concentric regions, with dictionaries of varying size trained on SIFT descriptors computed on samples from ILSVRC-2013 [11]. To make the comparison fair, the same scales are used to compute DSP-SIFT. By doing so, the only difference between the two methods is how to pool across scales, rather than what or where to pool. In SIFT-BOW, pooling is performed by encoding SIFTs from nearby scales using the quantized visual dictionary, while DSP-SIFT combines the histograms of gradient orientations across scales directly. To compute similarity between SIFT-BOWs, we tested both the intersection kernel and the Euclidean distance, achieving the best performance with the latter: .2062 mAP on Oxford and .3963 on Fischer (Table 1). Fig. 5 shows the direct comparison between DSP-SIFT and SIFT-BOW, with the former a clear winner.

Figure 5: DSP-SIFT vs. SIFT-BOW. Similarly to Fig. 4, each point represents one pair of images in the Oxford (left) and Fischer (right) datasets. The coordinates indicate average precision for each of the two methods under comparison. The relative performance improvement of the winner is shown in the title of each panel. DSP-SIFT outperforms SIFT-BOW by a wide margin on both datasets.

4.5 Complexity and Performance Tradeoff

Fig. 7 shows the complexity (descriptor dimension) vs. performance (mAP) tradeoff; Table 1 summarizes the results. In Fig. 7, an “ideal” descriptor would achieve the highest mAP using the smallest possible number of bits and land at the top-left corner of the graph. DSP-SIFT has the same (lowest) complexity as SIFT and the best mAP among all the descriptors. Looking horizontally in the graph, DSP-SIFT outperforms all the other methods at a fraction of their complexity. SLS achieves the second-best performance, but at the cost of a roughly 64-fold increase in dimension. In general, the performance of the CNN descriptors is worse than DSP-SIFT but, interestingly, their mAPs do not change significantly if the network responses are computed on a resampled 69×69 patch to obtain lower-dimensional descriptors.

4.6 Comparison with SIFT on Larger Domain Sizes

Descriptors computed on larger domain sizes are usually more discriminative, up to the point where the domain straddles occluding boundaries (Fig. 10). When using a detector, the size of the domain is usually chosen to be a factor of the detected scale, which affects performance in a way that depends on the dataset and the incidence of occlusions. In our experiments, this parameter (dilation factor) is set at 3, following [30], and we note that DSP-SIFT is less sensitive than ordinary SIFT to this parameter. Since DSP-SIFT aggregates domains of various sizes (smaller and larger) around the nominal size, it is important to ascertain whether the improvement in DSP-SIFT comes from size pooling, or simply from including larger domains. To this end, we compare DSP-SIFT, pooling domain sizes from a fraction through a multiple of the scale determined by the detector, to a single-size descriptor computed at the largest size (SIFT-L). This establishes that the increase in performance of DSP-SIFT over ordinary SIFT comes from pooling across domain sizes, not just from picking larger domain sizes. In the example in Fig. 6, the largest domain size yields an even worse performance than the detection scale. In a more complex scene where the test images exhibit occlusion, this would be even more pronounced, as there is a tradeoff between discriminative power (calling for a larger size) and the probability of straddling an occlusion (calling for a smaller size).

Method       | Dim. | mAP (Oxford) | mAP (Fischer)
SIFT         | 128  | .2750        | .4532
DSP-SIFT     | 128  | .3936        | .5372
CNN-L4-PS69  | 512  | .3059        | .4779
SIFT-BOW     | 2048 | .2062        | .3963
CNN-L3-PS69  | 4096 | .3164        | .4858
CNN-L4-PS91  | 8192 | .3068        | .5055
SLS          | 8256 | .3320        | .5135
RAW-PATCH    | 8281 | .1600        | .3479
CNN-L3-PS91  | 9216 | .3056        | .4899
Table 1: Summary of complexity (dimension) and performance (mAP) for all descriptors, sorted in order of increasing complexity. The lowest complexity and the best performances are both achieved by DSP-SIFT (second row). We also report mAP for CNN descriptors computed on patches of both sizes, as in [16]. The fourth row shows the comparison with a bag-of-words of SIFT descriptors computed at the same location but different domain sizes, described in detail in Sect. 4.4.
Figure 6: DSP-SIFT vs. SIFT-L. Similarly to Fig. 4, each point represents one pair of images in the Oxford dataset; the coordinates indicate average precision for each of the two methods under comparison, and the relative performance improvement of the winner is shown in the title of each panel. DSP-SIFT outperforms SIFT computed at the largest domain size, showing that the improvement of DSP-SIFT comes from pooling across domain sizes rather than from choosing a larger domain size; indeed, choosing a larger domain size actually decreases performance on the Oxford dataset.
Figure 7: Complexity-Performance Tradeoff. The abscissa is the descriptor dimension shown in log-scale, the ordinate shows the mean average precision.

5 Derivation

In this section we describe the trace of the derivation of DSP-SIFT, which is reported in Appendix B. Crucial to the derivation is the interpretation of a descriptor as a likelihood function [40].

1.  The likelihood function of the scene given images is a minimal sufficient statistic of the latter for the purpose of answering questions on the former [1]. Invariance to nuisance transformations induced by (semi-)group actions on the data can be achieved by representing orbits, which are maximal invariants [35]. The planar translation-scale group can be used as a crude first-order approximation of the action of the translation group in space (viewpoint changes) including scale change-inducing translations along the optical axis. This draconian assumption is implicit in most single-view descriptors.

2.  Comparing (semi-)orbits entails a continuous search (non-convex optimization) that has to be discretized for implementation purposes. The orbits can be sampled adaptively, through the use of a co-variant detector and the associated invariant descriptor, or regularly – as customary in classical sampling theory.

3.  In adaptive sampling, the detector should exhibit high sensitivity to nuisance transformations (e.g., small changes in scale should cause a large change in the response to the detector, thus providing accurate scale localization) and the descriptor should exhibit small sensitivity (so small errors in scale localization cause a small change in the descriptor). Unfortunately, for the case of SIFT (DoG detector and gradient orientation histogram descriptor), the converse is true.

4.  Because correspondence entails search over samples of each orbit, time complexity increases with the number of samples. Undersampling introduces structural artifacts, or “aliases,” corresponding to topological changes in the response of the detector. These can be reduced by “anti-aliasing,” an averaging operation. For the case of (approximations of) the likelihood function, such as SIFT and its variants, anti-aliasing corresponds to pooling. While spatial pooling is common practice, and reduces sensitivity to translation parallel to the image plane, scale pooling – which would provide insensitivity to translation orthogonal to the image plane – and domain-size pooling – which would provide insensitivity to small changes of visibility, are not. This motivates the introduction of DSP-SIFT, and the rich theory on sampling and anti-aliasing could provide guidelines on what and how to pool, as well as bounds on the loss of discriminative power coming from undersampling and anti-aliasing operations.

Figure 8: Scale-space vs. Size-space. Scale-space refers to a continuum of images obtained by smoothing and downsampling a base image. It is relevant to searching for correspondence when the distance to the scene changes. Size-space refers to a scale-space obtained by maintaining the same scale of the base image, but considering subsets of it of variable size. It is relevant to searching for correspondence in the presence of occlusions, so the size (and shape) of co-visible domains are not known.

6 Discussion

Image matching under changes of viewpoint, illumination and partial occlusions is framed as a hypothesis testing problem, which results in a non-convex optimization over continuous nuisance parameters. The need for efficient test-time performance has spawned an industry of engineered descriptors, which are computed locally so the effects of occlusions can be reduced to a binary classification (co-visible, or not). The best known is SIFT, which has been shown to work well in a number of independent empirical assessments [30, 32] that, however, come with little analysis of why it works or indications of how to improve it. We have made a step in that direction, by showing that SIFT can be derived from sampling considerations, where spatial binning and pooling are the result of anti-aliasing operations. However, SIFT and its variants only perform such operations for planar translations, whereas our interpretation calls for anti-aliasing domain size as well. Doing so can be accomplished in a few lines of code and yields significant performance improvements. Such improvements even place the resulting DSP-SIFT descriptor above a convolutional neural network (CNN), which had recently been reported as a top performer in the Oxford image matching benchmark [16]. Of course, we are not advocating replacing large neural networks with local descriptors. Indeed, there are interesting relations between DSP-SIFT and convolutional architectures, explored in [40, 41].

Domain-size pooling, and regular sampling of scale “unhinged” from the spatial frequencies of the signal, are divorced from scale-selection principles, rooted in scale-space theory, wavelets and harmonic analysis. There, the goal is to reconstruct a signal, with the focus on photometric nuisances (additive noise). In our case, the size of the domain where images correspond depends on the three-dimensional shape of the underlying scene and on visibility (occlusion) relations, and has little to do with the spatial frequencies or “appearance” of the scene. Thus, we do away with the linking of domain size and spatial frequency (“uncertainty principle”, Fig. 9).

DSP can be easily extended to other descriptors, such as HOG, SURF, CHOG, including those supported on structured domains such as DPMs [15], and to network architectures such as convolutional neural networks and scattering networks [6], opening the door to multiple extensions of the present work. In addition, a number of interesting open theoretical questions can now be addressed using the tools of classical sampling theory, given the novel interpretation of SIFT and its variants introduced in this paper.

Figure 9: The “uncertainty principle” links the size of the domain of a filter (ordinate) to its spatial frequency (abscissa): As the data is analyzed for the purpose of compression, regions with high spatial frequency must be modeled at small scale, while regions with smaller spatial frequency can be encoded at large scale. When the task is correspondence, however, the size of the co-visible domain is independent of the spatial frequency of the scene within. While approaches using “dense SIFT” forgo the detector and compute descriptors at regularly sampled locations and scales, they perform spatial pooling by virtue of the descriptor, but fail to perform pooling across scales, as we propose.
Figure 10: The discriminative power of a descriptor (e.g., mAP of SIFT) increases with the size of the domain, but so does the probability of straddling an occlusion and the approximation error of the imaging model implicit in the detector/descriptor. This effect, which also depends on the base size, is most pronounced when occlusions are present, but is present even on the Oxford dataset, shown above.

Acknowledgments

We are thankful to Nikolaos Karianakis for conducting the comparison with various forms of CNNs, and to Philipp Fischer, Alexey Dosovitskiy and Thomas Brox for sharing their dataset, evaluation protocol and comments. Research sponsored in part by NGA HM02101310004, leveraging on theoretical work conducted under the aegis of ONR N000141110863, NSF RI-1422669, ARO W911NF-11-1-0391, and FA8650-11-1-7156.

References

  • [1] R. R. Bahadur. Sufficiency and statistical decision functions. Annals of Mathematical Statistics, 25(3), pages 423–462, 1954.
  • [2] H. Bay, T. Tuytelaars, and L. V. Gool. Surf: Speeded up robust features. In Proc. of the European Conference on Computer Vision (ECCV), pages 404–417, Springer, 2006.
  • [3] A. Berg and J. Malik. Geometric blur for template matching. In Proc. of the Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2001.
  • [4] Y. L. Boureau, J. Ponce, and Y. LeCun. A theoretical analysis of feature pooling in visual recognition. In Proc. of the International Conference on Machine Learning (ICML), pages 111–118, 2010.
  • [5] J. V. Bouvrie, L. Rosasco, and T. Poggio. On invariance in hierarchical models. In Advances in Neural Information Processing Systems (NIPS), pages 162–170, 2009.
  • [6] J. Bruna and S. Mallat. Classification with scattering operators. In Proc. of the Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2011.
  • [7] T. Chan and S. Esedoglu. Aspects of Total Variation Regularized L1 Function Approximation. SIAM Journal on Applied Mathematics, 65(5), page 1817, 2005.
  • [8] V. Chandrasekhar, G. Takacs, D. Chen, S. Tsai, R. Grzeszczuk, and B. Girod. Chog: Compressed histogram of gradients a low bit-rate feature descriptor. In Proc. of the Conference on Computer Vision and Pattern Recognition (CVPR), pages 2504–2511, IEEE, 2009.
  • [9] C. Chen and H. Edelsbrunner. Diffusion runs low on persistence fast. In Proc. of the International Conference on Computer Vision (ICCV), pages 423–430, IEEE, 2011.
  • [10] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In Proc. of the Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2005.
  • [11] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In Proc. of the Conference on Computer Vision and Pattern Recognition (CVPR), pages 248–255, IEEE, 2009.
  • [12] J. Dong, N. Karianakis, D. Davis, J. Hernandez, J. Balzer, and S. Soatto. Multi-view feature engineering and learning. In Proc. of the Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2015.
  • [13] A. Dosovitskiy, J. T. Springenberg, and T. Brox. Unsupervised feature learning by augmenting single images. ArXiv preprint:1312.5242, 2013.
  • [14] C. Farabet, C. Couprie, L. Najman, and Y. LeCun. Scene parsing with multiscale feature learning, purity trees, and optimal covers. ArXiv preprint:1202.2160, 2012.
  • [15] P. Felzenszwalb, D. McAllester, and D. Ramanan. A discriminatively trained, multiscale, deformable part model. In Proc. of the Conference on Computer Vision and Pattern Recognition (CVPR), pages 1–8, IEEE, 2008.
  • [16] P. Fischer, A. Dosovitskiy, and T. Brox. Descriptor matching with convolutional neural networks: a comparison to sift. ArXiv preprint:1405.5769, 2014.
  • [17] V. Fragoso, P. Sen, S. Rodriguez, and M. Turk. Evsac: Accelerating hypotheses generation by modeling matching scores with extreme value theory. In Proc. of the International Conference on Computer Vision (ICCV), pages 2472–2479, IEEE, 2013.
  • [18] Y. Gong, L. Wang, R. Guo, and S. Lazebnik. Multi-scale orderless pooling of deep convolutional activation features. ArXiv preprint:1403.1840, 2014.
  • [19] V. Guillemin and A. Pollack. Differential Topology. Prentice-Hall, 1974.
  • [20] P. Hamel, S. Lemieux, Y. Bengio, and D. Eck. Temporal pooling and multiscale learning for automatic annotation and ranking of music audio. In Proc. of the International Society of Music Information Retrieval, pages 729–734, 2011.
  • [21] T. Hassner, V. Mayzels, and L. Zelnik-Manor. On sifts and their scales. In Proc. of the Conference on Computer Vision and Pattern Recognition (CVPR), pages 1522–1528, IEEE, 2012.
  • [22] Y. Jia, C. Huang, and T. Darrell. Beyond spatial pyramids: Receptive field learning for pooled image features. In Proc. of the Conference on Computer Vision and Pattern Recognition (CVPR), pages 3370–3377, IEEE, 2012.
  • [23] Y. LeCun. Learning invariant feature hierarchies. In Proc. of the European Conference on Computer Vision (ECCV), pages 496–505, Springer, 2012.
  • [24] T. Lee and S. Soatto. Learning and matching multiscale template descriptors for real-time detection, localization and tracking. In Proc. of the Conference on Computer Vision and Pattern Recognition (CVPR), pages 1457–1464, IEEE, 2011.
  • [25] T. Lee and S. Soatto. Video-based descriptors for object recognition. In Image and Vision Computing, 29(10):639–652, 2011.
  • [26] T. Lindeberg. Principles for automatic scale selection. Technical Report, KTH, Stockholm, CVAP, 1998.
  • [27] D. G. Lowe. Distinctive image features from scale-invariant keypoints. In International Journal of Computer Vision, 2(60), pages 91–110, Springer, 2004.
  • [28] J. Matas, O. Chum, M. Urban, and T. Pajdla. Robust wide baseline stereo from maximally stable extremal regions. In Proc. of the British Machine Vision Conference (BMVC), 2002.
  • [29] R. Memisevic. Learning to relate images. In IEEE Trans. on Pattern Analysis and Machine Intelligence., 35(8):1829–1846, 2013.
  • [30] K. Mikolajczyk and C. Schmid. A performance evaluation of local descriptors. In IEEE Trans. on Pattern Analysis and Machine Intelligence., pages 1615–1630, 2005.
  • [31] K. Mikolajczyk, T. Tuytelaars, C. Schmid, A. Zisserman, J. Matas, F. Schaffalitzky, T. Kadir, and L. V. Gool. A comparison of affine region detectors. In International Journal of Computer Vision, 1(60):63–86, Springer, 2004.
  • [32] P. Moreels and P. Perona. Evaluation of features detectors and descriptors based on 3d objects. In International Journal of Computer Vision, 73(3):263–284, Springer, 2007.
  • [33] M. Ranzato, F. J. Huang, Y.-L. Boureau, and Y. LeCun. Unsupervised learning of invariant feature hierarchies with applications to object recognition. In Proc. of the Conference on Computer Vision and Pattern Recognition (CVPR), pages 1–8, IEEE, 2007.
  • [34] T. Serre, A. Oliva, and T. Poggio. A feedforward architecture accounts for rapid categorization. Proceedings of the National Academy of Sciences, 104(15), pages 6424–6429, 2007.
  • [35] J. Shao. Mathematical Statistics. Springer Verlag, 1998.
  • [36] K. Simonyan, A. Vedaldi, and A. Zisserman. Deep fisher networks for large-scale image classification. In Advances in Neural Information Processing Systems (NIPS), pages 163–171, 2013.
  • [37] K. Simonyan, A. Vedaldi, and A. Zisserman. Learning local feature descriptors using convex optimisation. In IEEE Trans. on Pattern Analysis and Machine Intelligence., 2(4), 2014.
  • [38] J. Sivic and A. Zisserman. Video google: A text retrieval approach to object matching in videos. In Proc. of the Conference on Computer Vision and Pattern Recognition (CVPR), pages 1470–1477. IEEE, 2003.
  • [39] S. Soatto. Steps towards a theory of visual information: Active perception, signal-to-symbol conversion and the interplay between sensing and control. ArXiv preprint: 1110.2053, 2010.
  • [40] S. Soatto and A. Chiuso. Visual scene representations: Sufficiency, minimality, invariance and deep approximation. ArXiv preprint: 1411.7676, 2014.
  • [41] S. Soatto, J. Dong, and N. Karianakis. Visual scene representations: Contrast, scaling and occlusion. ArXiv preprint: 1412.6607, 2014.
  • [42] G. Sundaramoorthi, P. Petersen, V. S. Varadarajan, and S. Soatto. On the set of images modulo viewpoint and contrast changes. In Proc. of the Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2009.
  • [43] J. Susskind, R. Memisevic, G. E. Hinton, and M. Pollefeys. Modeling the joint density of two images under a variety of transformations. In Proc. of the Conference on Computer Vision and Pattern Recognition (CVPR), pages 2793–2800, IEEE, 2011.
  • [44] M. Tau and T. Hassner. Dense correspondences across scenes and scales. ArXiv preprint:1406.6323, 2014.
  • [45] G. W. Taylor, R. Fergus, Y. LeCun, and C. Bregler. Convolutional learning of spatio-temporal features. In Proc. of the European Conference on Computer Vision (ECCV), pages 140–153, Springer, 2010.
  • [46] A. Vedaldi and B. Fulkerson. Vlfeat: An open and portable library of computer vision algorithms. In Proc. of the International Conference on Multimedia, pages 1469–1472, ACM, 2010.
  • [47] C. Vondrick, A. Khosla, T. Malisiewicz, and A. Torralba. Hog-gles: Visualizing object detection features. In Proc. of the International Conference on Computer Vision (ICCV), IEEE, 2013.
  • [48] S. Winder and M. Brown. Learning local image descriptors. In Proc. of the Conference on Computer Vision and Pattern Recognition (CVPR), pages 1–8, IEEE, 2007.

Appendix A Relation to Sampling Theory

This first section summarizes the background needed for the derivation, reported in the next section.

A.1 Sampling and aliasing

In this section we refer to a general scalar signal f, for instance the projection of the albedo of the scene onto a scanline. We define a detector to be a mechanism to select samples x_i, and a descriptor to be a statistic computed from the signal of interest and associated with the sample x_i. In the simplest case, f is regularly sampled, so the detector does not depend on the signal, and the descriptor is simply the value of the function at the sample x_i. Other examples include:

A.1.1 Regular sampling (Shannon ’49)

The detector is trivial: the set of samples {x_i} is a lattice, independent of f. The descriptor is a weighted average of f in a neighborhood of fixed size (possibly unbounded) around x_i. Neither the detector nor the descriptor function depends on f (although the value of the latter, of course, does).

If the signal were band-limited, Shannon’s sampling theory would offer guarantees on the exact reconstruction of f from its sampled representation. Unfortunately, the signals of interest are not band-limited (images are discontinuous), and therefore the reconstruction can only approximate f. Typically, the approximation includes “alien structures,” i.e., spurious extrema and discontinuities that do not exist in f. This phenomenon is known as aliasing. To reduce its effects, one can replace the original data with another signal that is (closer to) band-limited and yet close to f, so that the samples can encode it free of aliasing artifacts. The conflicting requirements of faithful approximation of f and restriction of bandwidth trade off discriminative power (reconstruction error) against complexity, which is one of the central concerns of communications engineering. This tradeoff can be optimized by the choice of anti-aliasing operator, that is, the function that produces the band-limited approximation from f, usually via convolution with a low-pass filter. In our context, we seek a tradeoff between discriminative power and sensitivity to nuisance factors. This will come naturally when anti-aliasing is performed with respect to the action of nuisance transformations.
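As a concrete illustration (ours, not the paper's), the following numpy sketch shows the classical aliasing phenomenon: decimating a signal whose bandwidth exceeds the post-decimation Nyquist limit yields samples indistinguishable from a lower-frequency “alien structure,” whereas low-pass filtering before decimation removes the artifact.

```python
import numpy as np

# A signal with energy above the post-decimation Nyquist limit:
# 9 cycles over 32 samples, then decimated by a factor of 4
# (new Nyquist = 4 cycles).
n = np.arange(32)
f_high = np.cos(2 * np.pi * 9 * n / 32)

stride = 4
naive = f_high[::stride]               # no anti-aliasing

# The decimated samples are indistinguishable from a 1-cycle cosine:
# an "alien structure" (alias) not present in the original signal.
m = np.arange(8)
alias = np.cos(2 * np.pi * 1 * m / 8)
assert np.allclose(naive, alias)

# Anti-aliasing: zero out frequencies above the new Nyquist band
# before decimating (an ideal low-pass filter via the FFT).
F = np.fft.fft(f_high)
F[4:-3] = 0.0                          # keep only bins |k| <= 3
smoothed = np.fft.ifft(F).real
clean = smoothed[::stride]
assert np.max(np.abs(clean)) < 1e-9    # the aliased component is gone
```

The smoothed samples lose the high-frequency content (destructive effect), but they no longer encode structure that was never in the band-limited approximation.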

Figure 11: Detector specificity vs. descriptor sensitivity. (Left) Change of detector response (red) as a function of scale, computed around the optimal location and scale, and corresponding change of descriptor value (blue). An ideal detector would have high specificity (a sharp maximum around the true scale) and an ideal descriptor would have low sensitivity (a broad minimum around the same). The opposite is true here. This means that it is difficult to precisely select scale, and selection error results in large changes in the descriptor. Experiments are for the DoG detector and the identity descriptor. Referring to the notation in the Appendix (see details therein), (middle) template (red) and target (blue). (Right) corresponding scale-space. Note that the maximum detector response may not even correspond to the true location. The jaggedness of the response is an aliasing artifact.

A.1.2 Adaptive sampling (Landau ’67)

The detector could be “adapted” to f by designing a functional that selects samples x_i. Typically, the spatial frequencies of f modulate the length of the sampling interval. A special case of adaptive sampling that does not require stationarity assumptions is described next. The descriptor may also depend on f, e.g., by making the statistic depend on a neighborhood of variable size around x_i.

A.1.3 Tailored sampling (Logan ’77)

For signals that are neither stationary nor band-limited, we can leverage violations of these assumptions to design a detector. For instance, if f contains discontinuities, the detector can place samples at discontinuous locations (“corners”). For band-limited signals, the detector can place samples at critical points (maxima, or “blobs”, minima, saddles). A (location-scale) co-variant detector is a functional whose zero-level sets

(3)

define isolated (but typically multiple) samples of scales and locations, locally as a function of f via the implicit function theorem [19], in such a way that if f is transformed, for instance via a linear operator depending on location and scale parameters, then so are the samples.

The associated descriptor can then be any function of the image in the reference frame defined by the samples, the most trivial being the restriction of the original function to the neighborhood of each sample. This, however, does not reduce the dimensionality of the representation. Other descriptors can compute statistics of the signal in the neighborhood, or on the entire line. Note that descriptors could have different dimensions for each sample.

Figure 12: Aliasing: (Top left) A random row is selected as the target and re-scaled to yield its scale orbit; a subset of it, cropped, re-scaled, and perturbed with noise, is chosen as the template. The distance between the two is shown in red (right) as a function of scale. The same exercise is repeated for different sub-samplings of scale, and rescaled for display either as a mesh (middle left) or a heat map (right) that clearly shows aliasing artifacts along the optimal ridge. Anti-aliasing scale (bottom) produces a cleaner ridge (left, right). The net effect of anti-aliasing has been to smooth the matching score (top right, in blue) without computing it on a fine grid. Note that the valley of the minimum is broader, denoting decreased sensitivity to scale, and the value is somewhat higher, denoting decreased discriminative power and a risk of aliasing if the value rises above that of other local minima.

A.1.4 Anti-aliasing and “pooling”

In classical sampling theory, anti-aliasing refers to low-pass filtering or smoothing that typically does not cause genetic phenomena (spurious extrema, or aliases, appearing in the reconstruction of the smoothed signal). This central tenet of scale-space theory only holds for scalar signals; nevertheless, genetic effects have been shown to be rare in two-dimensional Gaussian scale-space [9]. Of course, anti-aliasing typically has destructive effects, in the sense of eliminating extrema that are present in the original signal.

A side-effect of anti-aliasing, which has implications when the goal is not to reconstruct but to detect or localize a signal, is to reduce the sensitivity of the relevant variable (the descriptor) to variations of the samples (the detector). If we sample translations and just store the values of f at the samples, an arbitrarily small translation of a sample can cause an arbitrarily large variation in the representation when the sample falls on a discontinuity, so the sensitivity is unbounded. An anti-aliasing operator should reduce the sensitivity to translation. Of course, this could be trivially achieved by choosing a constant descriptor; the goal is to trade off sensitivity against discriminative power. For the case of translation, this tradeoff has been described in [6]. However, similar considerations hold for scale and domain-size sampling.
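A minimal numerical sketch of this sensitivity argument (our illustration, with arbitrarily chosen grid and window sizes): for a step signal, the raw sample at the discontinuity changes by the full step height under an arbitrarily small translation, while a pooled (averaged) descriptor changes only in proportion to the shift.

```python
import numpy as np

# Work on an integer grid to avoid floating-point edge effects.
x = np.arange(-1000, 1001)             # sample locations
f = (x >= 0).astype(float)             # a step edge at the origin
f_shift = (x - 10 >= 0).astype(float)  # the same edge, translated by 10 units

# Descriptor 1: the raw sample at the origin (no anti-aliasing).
i0 = 1000                              # index of x = 0
raw_change = abs(f[i0] - f_shift[i0])

# Descriptor 2: an average pooled over a window of half-width 200.
w = np.abs(x) <= 200
pooled_change = abs(f[w].mean() - f_shift[w].mean())

assert raw_change == 1.0     # full step height: unbounded relative sensitivity
assert pooled_change < 0.03  # roughly shift / window width
```

The averaged descriptor is also less discriminative (it blurs the edge location), which is exactly the tradeoff discussed above.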

Appendix B Derivation of DSP-SIFT

The derivation of DSP-SIFT and its extensions follows a series of steps summarized as follows:

  • We start from the correspondence, or matching, task: Classify a given datum (the test image, or target) as coming from one of a number of model classes, each represented by an image (the training images, or templates).

  • Both training and testing data are affected by nuisance variability due to changes of (i) illumination, (ii) vantage point and (iii) partial occlusion. The former is approximated by local contrast transformations (monotonic continuous changes of intensity values), a maximal invariant to which is the gradient orientation. Vantage point changes are decomposed into a translation parallel to the image plane, approximated by a planar translation of the image, and a translation orthogonal to it, approximated by a scaling of the image. Partial occlusions determine the shape of corresponding regions in training and test images, which are approximated by a given shape (say a circle, or square) of unknown size (scale). These are very crude approximations, but nevertheless implicit in most local descriptors. In particular, camera rotations are not addressed in this work, although others have done so [12].

  • Solving the (local) correspondence problem amounts to a multiple-hypothesis testing problem, including the background class. Nuisance (i) is eliminated at the outset by considering gradient orientation instead of image intensity. Dealing with nuisances (ii)–(iii) requires searching across all (continuous) translations, scales, and domain sizes.

  • The resulting matching function must be discretized for implementation purposes. Since the matching cost is quadratic in the number of samples, sampling should be reduced to a minimum, which in general introduces artifacts (“aliasing”).

  • Anti-aliasing operators can be used to reduce the effects of aliasing artifacts. For the case of (approximations of) the likelihood function, such as SIFT, anti-aliasing corresponds to marginalizing residual nuisance transformations, which in turn corresponds to pooling gradient orientations across different locations, scales and domain sizes.

  • The samples can be thought of as a special case of “deformation hypercolumns” [39] (samples with respect to the orientation group) with the addition of the size-space semi-group (Fig. 8). Most importantly, the samples along the group are anti-aliased, to reduce the effects of structural perturbations.
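The pooling step summarized above can indeed be written in a few lines. The following is a toy sketch, not the actual DSP-SIFT implementation: it omits SIFT's spatial binning and soft orientation assignment, and the function names, size factors and bin count are our choices.

```python
import numpy as np

def orientation_histogram(patch, n_bins=8):
    """Gradient-orientation histogram of a grayscale patch, weighted by
    gradient magnitude (orientation is a maximal invariant to monotonic
    contrast changes)."""
    gy, gx = np.gradient(patch.astype(float))
    theta = np.arctan2(gy, gx) % (2 * np.pi)
    mag = np.hypot(gx, gy)
    h, _ = np.histogram(theta, bins=n_bins, range=(0, 2 * np.pi), weights=mag)
    return h

def dsp_descriptor(image, cx, cy, base_radius, size_factors=(0.75, 1.0, 1.25)):
    """Domain-size pooling: aggregate normalized histograms computed on
    concentric domains of several sizes, with uniform weights."""
    acc = 0.0
    for s in size_factors:
        r = int(round(base_radius * s))
        patch = image[cy - r:cy + r + 1, cx - r:cx + r + 1]
        h = orientation_histogram(patch)
        acc = acc + h / (h.sum() + 1e-12)   # normalize, then pool across sizes
    return acc / len(size_factors)

# Usage on a random image (illustrative).
img = np.random.default_rng(0).random((64, 64))
d = dsp_descriptor(img, cx=32, cy=32, base_radius=10)
```

Note the pooled descriptor has the same dimension as a single-size histogram, mirroring the fact that DSP-SIFT has the same dimension as SIFT.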

B.1 Formalization

For simplicity, we formalize the matching problem for a scalar image (a scanline), and neglect contrast changes for now, focusing on the location-scale group and domain size instead.

Let f_k, k = 1, …, K, be the possible models (templates, or ideal training images). The data (test image) is a collection of samples y(x_i), each obtained from one of the f_k via translation by t, scaling by s, and sampling with interval ε, if x_i is in the visible domain. Otherwise, the scene is occluded and y(x_i) has nothing to do with f_k.

The forward model that, given a template f_k and all nuisance factors (t, s, ε), generates the data is indicated as follows: If x_i is in the visible domain, then

y(x_i) = f_k(s x_i + t) + n(x_i)  (4)

where n(x_i) is a sample of a white, zero-mean Gaussian random variable with variance σ². Otherwise, y(x_i) is a realization of a process independent of f_k (the “background”). The sampling operator is linear: it can be written as an integral on the real line using a characteristic function, or a more general sampling kernel, for instance a zero-mean Gaussian with standard deviation ε. Then we have

(5)

with the kernel given by

(6)

where the integration domain is a region corresponding to a pixel centered at x_i. Matching then amounts to a hypothesis testing problem on whether a given measurement y is generated by any of the f_k – under a suitable choice of nuisance parameters – or is otherwise just labeled as background:

(7)

and the alternate hypothesis is simply the complement. If the background density is unknown, the likelihood ratio test reduces to the comparison of the product on the right-hand side to a threshold, typically tuned to the ratio with the second-best match (although some recent work using extreme-value theory improves on this [17]). In any case, the log-likelihood for points in the co-visible interval can be written as

(8)

which will have to be minimized for all pixels and templates f_k, of which there is a finite number. However, it also has to be minimized over the continuous variables (t, s, ε). Since the cost is in general neither convex nor smooth as a function of these parameters, analytical solutions are not possible. Discretizing these variables is necessary (coarse-to-fine, homotopy-based methods or jump-diffusion processes can alleviate, but not remove, this burden), and since the minimization amounts to a search over several dimensions, we seek methods to reduce the number of samples with respect to each of the arguments as much as possible.
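A brute-force version of this discretized search can be sketched for a 1-D scanline. All names, the grid steps, and the noise level below are our illustrative choices, not the paper's implementation:

```python
import numpy as np

def render(template, x, t, s):
    # Noise-free forward model: sample the template at s*x + t,
    # zero outside its domain (a crude visibility model).
    return np.interp(s * x + t, np.arange(len(template)), template,
                     left=0.0, right=0.0)

rng = np.random.default_rng(1)
template = rng.random(200)
x = np.arange(50)
# Target generated with true nuisances t = 60, s = 1.5, plus noise.
target = render(template, x, 60.0, 1.5) + 0.01 * rng.standard_normal(50)

# Discretize the continuous nuisances and search exhaustively; under
# the Gaussian noise model, the negative log-likelihood is the sum of
# squared residuals.
ts = np.arange(0.0, 150.0, 2.0)
ss = np.arange(0.5, 2.51, 0.1)
cost, t_hat, s_hat = min(
    (np.sum((render(template, x, t, s) - target) ** 2), t, s)
    for t in ts for s in ss)
```

The cost of this exhaustive search grows with the product of the grid sizes, which is exactly why coarse sampling (and hence anti-aliasing) becomes necessary.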

There are many ways to sample, some described in Sect. A.1, so several questions are in order: (a) How should each variable be sampled? Regularly or adaptively? (b) If sampled regularly, when do aliasing phenomena occur? Can anti-aliasing be performed to reduce their effects? (c) The search is joint over several variables, and given some of them, it is easy to optimize over the others. Can they be “separated”? (d) Is it possible to quantify and optimize the tradeoff between the number of samples and classification performance? Or, for a given number of samples, to develop the “best” anti-aliasing (“descriptor”)? (e) For a histogram descriptor, how is “anti-aliasing” accomplished?

B.2 Common approaches and their rationale

Concerning question (a) above, most approaches in the literature perform tailored sampling (Sect. A.1.3) of both location and scale, by deploying a location-scale covariant detector [27]. When time is not a factor, it is common to forgo the detector and compute descriptors “densely” (a misnomer) by regularly subsampling the image lattice, or possibly undersampling by a fixed “stride.” Sometimes, scale is also regularly sampled, typically at far coarser granularity than the scale-space used for scale selection, for obvious computational reasons. In general, regular sampling requires assumptions on band limits, and the matching cost is not band-limited as a function of scale. Therefore, tailored sampling (detector/descriptor) is best suited for the translation group. (The purported superiority of “dense SIFT,” regularly sampled at thousands of locations, compared to ordinary SIFT, at tens or hundreds of detected locations, as reported in a few empirical studies, is misleading, as the comparison has to be performed for a comparable number of samples.) We will therefore assume that translation has been tailor-sampled (detected, or canonized), but only up to a localization error. Without loss of generality we assume the sample is centered at zero, and the residual translation is in a neighborhood of the origin. In Fig. 11 we show that the sensitivity to scale of a common detector (DoG), which should be high, is instead lower than the sensitivity of the resulting descriptor, which should be low. Therefore, small changes in scale cause large changes in scale-sample localization, which in turn cause large changes in the value of the descriptor. We thus forgo scale selection, and instead finely sample scale. This causes complexity issues, which prompt the need to sub-sample, and correspondingly to anti-alias, or aggregate, across scale samples. Alternatively, as done in Sect. 4, we can have a coarse adaptive or tailored sampling of scales, and then perform fine-scale sampling and anti-aliasing around the (multiple) selected scales.

Concerning (b), aliasing phenomena appear as soon as Nyquist’s conditions are violated, which is almost always the case for scale and domain size (Fig. 12). While most practitioners are reluctant to down-sample spatially, leaving millions of locations to test, it is rare for anyone to employ more than a few tens of scales, corresponding to a wild down-sampling of scale-space. This is true a fortiori for domain size, which is often simply fixed [16]. And yet, spatial anti-aliasing is routinely performed in most descriptors, whereas none – to the best of our knowledge – performs scale or domain-size anti-aliasing. Anti-aliasing should ideally decrease the sensitivity of the descriptor without excessive loss of discriminative power. This is illustrated in Fig. 12.

For (c), we choose to fix the domain size in the target (test) image, and to regularly sample scale and domain size, re-mapping each to the domain size of the target (Fig. 1). For comparison with [16], we choose this to match their patch size. While the choice of fixing one of the two domains entails a loss, it can be justified as follows: Clearly, the hypothesis cannot be tested independently on each datum. However, testing on any subset of the “true inlier set” reduces the power, but not the validity, of the test. Vice-versa, using a “superset” that includes outliers invalidates the test. However, a small percentage of outliers can be managed by considering a robust (Huber) norm instead of the quadratic norm. Therefore, one could consider a sequential hypothesis testing problem, starting from each sample as a hypothesis, then “growing” the region by one sample, and repeating the test; the optimization has to be solved at each step. (In this interpretation, the test can be thought of as a setpoint change-detection problem. Another interpretation is that of binary region-based segmentation, where one wishes to classify the range of a function into two classes, with values coming from either a template or the background, but the threshold is placed on the domain of the function. Of course, the statistics used for the classification depend on the partition, so this has to be solved as an alternating minimization, but it is a convex one [7].) As a first-order approximation, one can fix the interval and accept a less powerful test (if it is a subset of the actual domain) or a test corrupted by outliers (if it is a superset). This is, in fact, done in most local feature-based registration or correspondence methods, and even in region-based segmentation of textures, where statistics must be pooled in a region.

While (d) is largely an open question, (e) follows directly from classical sampling considerations, as described in Sect. A.1.

B.3 Anti-aliasing descriptors

In the case of matching images under nuisance variability, it has been shown [12] that the ideal descriptor computed at a location is not a vector, but a function that approximates the likelihood, where the nuisances are marginalized. In practice the descriptor is approximated with a regularized histogram, similar to SIFT (1). In this case, anti-aliasing corresponds to a weighted average across different locations, scales and domain sizes. But the averaging in this case is simply accomplished by pooling the histogram across different locations and domain sizes, as in (2). The weight function can be designed to optimize the tradeoff between sensitivity and discrimination, although in Sect. 4 we use a simple uniform weight.

To see how pooling can be interpreted as a form of generalized anti-aliasing, consider a function f sampled on a discretized domain, together with a neighborhood of each sample (for instance the sampling interval). The pooled histogram is

(9)

whereas the anti-aliased signal (for instance with respect to the pillbox kernel) is

(10)

The latter can be obtained as the mean of the former

(11)

although the former can be used for purposes other than computing the mean (which is the best estimate under Gaussian uncertainty), for instance to compute the median (corresponding to the best estimate under uncertainty measured by the ℓ1 norm), or the mode:

(12)

The approximation is accurate only to the extent to which the underlying distribution is stationary and ergodic (so the spatially pooled histogram approaches the density), but otherwise it is still a generalization of the weighted average, or mean.
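The relation between the pooled histogram and the box-filtered (pillbox anti-aliased) signal is easy to check numerically. The sketch below (ours, with arbitrary bin counts and neighborhood size) verifies that the histogram mean recovers the neighborhood average up to bin quantization, while the histogram additionally supports statistics such as the mode:

```python
import numpy as np

rng = np.random.default_rng(2)
f = rng.random(101)                    # signal samples on a discretized domain
i0, half = 50, 10
neighborhood = f[i0 - half:i0 + half + 1]

# Pooled histogram of the values in the neighborhood (fine bins).
bins = np.linspace(0.0, 1.0, 201)
h, edges = np.histogram(neighborhood, bins=bins)
centers = 0.5 * (edges[:-1] + edges[1:])
p = h / h.sum()

# Its mean recovers the box-filtered ("pillbox anti-aliased") value...
mean_from_hist = np.sum(p * centers)
box_filtered = neighborhood.mean()
assert abs(mean_from_hist - box_filtered) < 0.005   # up to bin quantization

# ...but the histogram also yields other statistics, e.g. the mode,
# which the plain anti-aliased value alone cannot provide.
mode = centers[np.argmax(h)]
```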

This derivation also points the way to how a descriptor can be used to synthesize images: Simply by sampling the descriptor, thought of as a density for a given class [12, 47]. It also suggests how descriptors can be compared: Rather than computing descriptors in both training and test images, a test datum can just be fed to the descriptor, to yield the likelihood of a given model class [15], without computing the descriptor in the test image.

Appendix C Effect of the detector on the descriptor

A detector is a function of the data that returns an element of a chosen group of transformations, the most common being translation (e.g., FAST), translation-scale (e.g., SIFT), similarity (e.g., SIFT combined with the direction of maximum gradient), and affine (e.g., Harris-affine). Once transformed by the (inverse of the) detected transformation, the data is, by construction, invariant to the chosen group. If that were the only nuisance affecting the data, there would be no need for a descriptor, in the sense that the data itself, in the reference frame determined by any co-variant detector, is a maximal invariant to the nuisance group.

However, often the chosen group only captures a small subset of the transformations undergone by the data. For instance, all the groups above are only coarse approximations of the deformations undergone by the domain of an image under a change of viewpoint [42]. Furthermore, there are transformations affecting the range of the data (image intensity) that are not captured by (most) co-variant detectors. The purpose of the descriptor is to reduce variability to transformations that are not captured by the detector, while retaining as much as possible of the discriminative power of the data.

In theory, so long as descriptors are compared using the same detector, the particular choice of detector should not affect the comparison. In practice, there are many second-order effects whereby quantization and unmodeled phenomena affect different descriptors in different manners. Moreover, the choice of detector could affect different descriptors in different ways. The important role of the detector, however, is to determine a co-variant reference frame in which the descriptor is computed.

In standard SIFT, image gradient orientations are aggregated in selected regions of scale-space. Each region is defined in the octave corresponding to the selected scale, centered at the selected pixel location, where the selection is determined by the SIFT detector. Although the size of the original image subtended by each region varies depending on the selected scale (from a few to a few hundred pixels), the histogram is aggregated in regions that have constant size across octaves (the sizes are slightly different within each octave so as to subtend a constant region of the image). These are design parameters; in VLFeat, for instance, they are assigned default values. In a different scale-space implementation, one could have a single design parameter, which we call the “base size” for simplicity.

In comparing with a convolutional neural network, Fisher et al. [16] chose patches of size and (which we call ) in images of maximum dimension . This choice is made for convenience in order to enable using pre-trained networks. They use MSER to detect candidate regions for testing, rather than SIFT’s detector. However, rather than using the size of the original MSER to determine the octave where SIFT should be computed, they pre-process all patches to size . As a result, all SIFT descriptors are computed at the same octave , rather than at the scale determined by the detector. This short-changes SIFT, as some descriptors are computed in regions that are too small relative to their scale, and others too large.

Appendix D Choice of domain for comparison with CNNs

One way to correct this bias would be to use as the base size. However, this would yield an even worse (dataset-induced) bias: A base size of in images of maximum dimension means that any feature detected at higher octaves encompasses the entire image. While discriminative power increases with size, so does the probability of straddling an occlusion: The power of a local descriptor increases with size only up to a point, where occlusion phenomena become dominant (Fig. 10). This phenomenon is evident even in Oxford and Fisher’s datasets despite them being free of any occlusion phenomena. Note that while Fisher et al. allow regions of size smaller than to be detected (and scale them up to that size), in effect anything smaller than is considered at the native resolution, whereas using the SIFT detector would send anything larger than to a higher octave.

A more sound way to correct the bias is to use the detector in its proper role, that is to determine a reference frame with respect to which the descriptor is computed. For the case of MSER, this consists of affine transformations. Therefore, the region where the descriptor is computed is centered, oriented, skewed and

scaled depending on the area of the region detected. Rather than arbitrarily fixing the scale by choosing a size to which to re-scale all patches, regions of different size are selected, and then each assigned a scale which is equal its area divided by the base size . That would determine the octave where SIFT is computed.

In any case, regardless of what detector is used, DSP-SIFT is all about where to compute the descriptor: Instead of being computed just at the selected size, however it is chosen, it should be computed for multiple domain sizes. But scales have to be selected and mapped to the corresponding location in scale-space. There, SIFT aggregates gradient orientation at a single scale, whereas DISP-SIFT aggregates at multiple scales.

Figure 13: Unidirectionality of mapping over scale. Given two matching patches, one at high resolution, one at low resolution, comparison can be performed by mapping the high-resolution image to low-resolution by downsampling, or vice-versa mapping the low-resolution to high-resolution by upsampling and interpolation. Scale-space theory suggests that comparison should be performed at the lower resolution, since structures present at the high resolution cannot be re-created by upsampling and interpolation. The figure shows matching distance for matching high-to-low, and low-to-high (average for random image patches in the Oxford dataset). This is why one should not choose a base region that is too large: That would cause all smaller regions to be upsampled and interpolated, to the detriment of matching scores. Note that computing descriptors at the native resolution, instead of the corresponding octave in scale-space, is equivalent to choosing a larger base region.
Figure 14: Performance for varying choice of base size. The base size determines the direction in which comparison over scale is performed: Larger regions are mapped down-scale, correctly. Smaller regions are mapped up-scale, to the detriment of the matching score. In theory, the larger the base size the better, up to the point where it impinges on occlusion phenomena. This explains the diminishing return behavior shown above. Different base size also affects what normalization threshold should be. We observe that a smaller threshold gives better performance with the most widely used base size () default in VLFeat [46].