We introduce a simple modification of local image descriptors, such as SIFT, based on pooling gradient orientations across different domain sizes, in addition to spatial locations. The resulting descriptor, which we call DSP-SIFT, outperforms other methods in wide-baseline matching benchmarks, including those based on convolutional neural networks, despite having the same dimension as SIFT and requiring no training.
Local image descriptors, such as SIFT and its variants, are designed to reduce variability due to illumination and vantage point while retaining discriminative power. This facilitates finding correspondences between different views of the same underlying scene. In a wide-baseline matching task on the Oxford benchmark [30, 31], nearest-neighbor SIFT descriptors achieve a mean average precision (mAP) substantially higher than direct comparison of normalized grayscale values; other datasets yield similar results. Functions that reduce sensitivity to nuisance variability can also be learned from data [29, 33, 43, 45, 48]. Convolutional neural networks (CNNs) can be trained to "learn away" nuisance variability while retaining class labels using large annotated datasets. In particular, one approach uses (patches of) natural images as surrogate classes and adds transformed versions to train the network to discount nuisance variability. The activation maps in response to image values can be interpreted as a descriptor and used for correspondence, and [13, 16] show that such a CNN outperforms SIFT, albeit with a much larger dimension. Here we show that a simple modification of SIFT, obtained by pooling gradient orientations across different domain sizes ("scales") in addition to spatial locations, improves it by a considerable margin, also outperforming the best CNN. We call the resulting descriptor "domain-size pooled" SIFT, or DSP-SIFT.
Pooling across different domain sizes can be implemented in a few lines of code, can be applied to any histogram-based method (Sect. 3), and yields a descriptor of the same size that outperforms the original essentially uniformly (Fig. 4). Yet combining histograms of images of different sizes is counterintuitive and seemingly at odds with the teachings of scale-space theory and the resulting established practice of scale selection (Sect. 1.1). It is, however, rooted in classical sampling theory and anti-aliasing. Sect. 2 describes what we do, Sect. 3 how we do it, and Sect. 5 why we do it. Sect. 4 validates our method empirically.
h_SIFT(θ, x, σ)[I] = ∫ κ_ε(θ − ∠∇I(y)) κ_σ(y − x) ‖∇I(y)‖ dy,    (1)

where I is the image restricted to a square domain, centered at a location x with size σ in the lattice determined by the response to a difference-of-Gaussian (DoG) operator across all locations and scales (the SIFT detector). Here θ, the independent variable, ranges from 0 to 2π, corresponding to an orientation histogram bin of size ε, and σ is the spatial pooling scale. The kernel κ_ε is bilinear of size ε and κ_σ is separable-bilinear of size 6σ, although they could be replaced by an angular Gaussian with dispersion parameter ε and a Gaussian with standard deviation σ, respectively. The SIFT descriptor is the concatenation of the cells (1) computed at locations on a lattice, and normalized.
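As a concrete illustration of one such cell, the sketch below computes a gradient-orientation histogram of a square patch, with each gradient weighted by its magnitude and a Gaussian spatial window standing in for the separable-bilinear kernel of (1). The function name and the simplifications (hard bin assignment, no angular interpolation) are ours, not VLFeat's; this is a minimal sketch, not the reference SIFT implementation.

```python
import numpy as np

def orientation_histogram(patch, num_bins=8, sigma=None):
    """Gradient-orientation histogram of a square patch (one SIFT-like cell).

    Orientations are binned over [0, 2*pi), each gradient weighted by its
    magnitude and by a Gaussian spatial window (standing in for the
    separable-bilinear kernel in (1)). Simplified sketch only.
    """
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    # Gaussian spatial weighting centered on the patch
    n = patch.shape[0]
    if sigma is None:
        sigma = n / 2.0
    yy, xx = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
    w = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    # Hard-assign each pixel's orientation to a bin (no angular interpolation)
    bins = np.minimum((ang / (2 * np.pi) * num_bins).astype(int), num_bins - 1)
    h = np.zeros(num_bins)
    np.add.at(h, bins.ravel(), (mag * w).ravel())
    return h
```

A horizontal intensity ramp, for instance, has all gradients pointing along +x (orientation 0), so all the mass falls into the first bin.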
The spatial pooling scale and the size of the image domain where the SIFT descriptor is computed are tied to the photometric characteristics of the image, since the scale is derived from the response of a DoG operator on the (single) image. (Approaches based on "dense SIFT" forgo the detector and instead compute descriptors on a regular sampling of locations and scales, Fig. 9; however, no existing dense SIFT method performs domain-size pooling.) Such a response depends on the reflectance properties of the scene and the optical characteristics and resolution of the sensor, neither of which is related to the size and shape of co-visible (corresponding) regions. Instead, how large a portion of a scene is visible in each of the corresponding images depends on the shape of the scene, the pose of the two cameras, and the resulting visibility (occlusion) relations. Therefore, we propose to untie the size of the domain where the descriptor is computed ("scale") from the photometric characteristics of the image, departing from the teachings of scale selection (Fig. 8). Instead, we use basic principles of classical sampling theory and anti-aliasing to achieve robustness to domain-size changes due to occlusions (Sect. 5).
Pooling is commonly understood as the combination of responses of feature detectors/descriptors at nearby locations, aimed at transforming the joint feature representation into a more usable one that preserves important information (intrinsic variability) while discarding irrelevant detail (nuisance variability) [4, 22]. However, precisely how pooling trades off these two conflicting aims is unclear and mostly addressed empirically in end-to-end comparisons with numerous confounding factors. Exceptions include prior work where intrinsic and nuisance variability are combined and abstracted into the variance of, and distance between, the means of scalar random variables in a binary classification task. In more general settings, the goal of reducing nuisance variability while preserving intrinsic variability is elusive, as a single image does not afford the ability to separate the two.
An alternate interpretation of pooling as anti-aliasing  clearly highlights its effects on intrinsic and nuisance variability: Because one cannot know what portion of an object or scene will be visible in a test image, a scale-space (“semi-orbit”) of domain sizes (“receptive fields”) should be marginalized or searched over (“max-out”). Neither can be computed in closed-form, so the semi-orbit has to be sampled. To reduce complexity, only a small number of samples should be retained, resulting in undersampling and aliasing phenomena that can be mitigated by anti-aliasing, with quantifiable effects on the sensitivity to nuisance variability. For the case of histogram-based descriptors, anti-aliasing planar translations consists of spatial pooling, routinely performed by most descriptors. Anti-aliasing visibility results in domain-size aggregation, which no current descriptor practices. This interpretation also offers a way to quantify the effects of pooling on discriminative (reconstruction) power directly, using classical results from sampling theory, rather than indirectly through an end-to-end classification experiment that may contain other confounding factors.
Domain-size pooling can be applied to a number of different descriptors or convolutional architectures. We illustrate its effects on the most popular, SIFT. However, we point out that proper marginalization requires the availability of multiple images of the same scene, and therefore cannot be performed in a single image. While most local image descriptors are computed from a single image, exceptions include [12, 25]. Of course, multiple images can be “hallucinated” from one, but the resulting pooling operation can only achieve invariance to modeled transformations.
In neural network architectures, there is evidence that abstracting spatial pooling hierarchically, i.e., aggregating nearby responses in feature maps, is beneficial. This process could be extended by aggregating across different neighborhood sizes in feature space. To the best of our knowledge, only one architecture performs some kind of pooling across scales, and even there the justification provided only concerns translation within each scale. The same goes for approaches where pooling (low-pass filtering) is performed only within each scale, not across scales. Other works learn the regions for spatial pooling, for instance [22, 37], but still restrict pooling to within-scale, rather than across scales as we advocate.
We distinguish multi-scale methods, which concatenate descriptors computed independently at each scale, from cross-scale pooling, where statistics of the image at different scales are combined directly in the descriptor. Examples of the former include approaches where ordinary SIFT descriptors computed on domains of different size are assumed to belong to a linear subspace, and where Fisher vectors are computed for multiple sizes and aspect ratios with spatial pooling occurring within each level. Bag-of-words (BoW) methods, as mid-level representations, also aggregate different low-level descriptors by counting their frequency after discretization. Typically, vector quantization or another clustering technique is used: each descriptor is associated with a cluster center ("word"), and the frequency of each word is recorded in lieu of the descriptors themselves. This can be done for domain size by computing different descriptors at the same location, for different domain sizes, and then counting frequencies relative to a dictionary learned from a large training dataset (Sect. 4.4).
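The BoW aggregation step just described can be sketched as follows: given a set of descriptors and a pre-learned dictionary of cluster centers, each descriptor is assigned to its nearest word and only word frequencies are retained. The function name and interface are ours, for illustration; the dictionary would be learned offline (e.g., by k-means) in practice.

```python
import numpy as np

def bow_histogram(descriptors, dictionary):
    """Bag-of-words pooling: assign each descriptor to its nearest cluster
    center ("word") and record word frequencies instead of the descriptors
    themselves. Rows of `dictionary` are cluster centers learned offline.
    Illustrative sketch only.
    """
    # Pairwise squared Euclidean distances (n_descriptors x n_words)
    d2 = ((descriptors[:, None, :] - dictionary[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(dictionary)).astype(float)
    return hist / hist.sum()  # normalized frequency histogram
```

Note that, unlike domain-size pooling, this discards the descriptors themselves and keeps only their quantized frequencies.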
Aggregation across time, which may include changes of domain size, has been advocated elsewhere, but in the absence of formulas it is unclear how that approach relates to our work. In other work, weights are shared across scales, which is not equivalent to pooling but still establishes some dependencies across scales. MTD appears to be the first instance of pooling across scales, although the aggregation is global in scale-space, with a consequent loss of discriminative power. Most recently, similar pooling has been advocated, but in practice space-pooled VLAD descriptors obtained at different scales are simply concatenated. Another approach can be thought of as a form of pooling, but the resulting descriptor only captures the mean of the resulting distribution. Yet another exploits the possibility of estimating the proper scales for nearby features via scale propagation, but still no pooling is performed across scales. Additional details on related prior work are discussed in Appendix A.
If SIFT is written as (1), then DSP-SIFT is given by

h_DSP(θ, x)[I] = ∫ h_SIFT(θ, x, σ)[I] E_s(σ) dσ,    (2)

where s is the size-pooling scale and E_s is an exponential or other unilateral density function. The process is visualized in Fig. 1. Unlike SIFT, which is computed on a scale-selected lattice, DSP-SIFT is computed on a regularly sampled lattice. Computed on a different lattice, the above can be considered a recipe for DSP-HOG. Computed on a tree, it can be used to extend deformable-parts models (DPM) to DSP-DPM. Replacing h_SIFT with another histogram-based descriptor "X" (for instance, SURF), the above yields DSP-X. Applied to a hidden layer of a convolutional network, it yields a DSP-CNN, or DSP-Deep-Fisher-Network. The details of the implementation are in Sect. 3.
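A discretized version of (2) can be sketched in a few lines: average single-size histogram descriptors computed at several domain sizes, then renormalize. The interface below is hypothetical (any histogram-based descriptor "X" can be plugged in as `descriptor_at_scale`), and uniform weights stand in for the density E_s.

```python
import numpy as np

def dsp_descriptor(descriptor_at_scale, scales, weights=None):
    """Domain-size pooling, a discretization of (2): average single-size
    histogram descriptors computed at several domain sizes around the nominal
    one, then renormalize. `descriptor_at_scale(s)` is any histogram-based
    descriptor evaluated on a domain of size s; the result (DSP-X) has the
    same dimension as a single-size descriptor. Uniform weights stand in
    for the unilateral density E_s. Sketch, not the reference code.
    """
    if weights is None:
        weights = np.full(len(scales), 1.0 / len(scales))
    pooled = sum(w * np.asarray(descriptor_at_scale(s))
                 for s, w in zip(scales, weights))
    return pooled / np.linalg.norm(pooled)
```

The key point is that pooling happens inside the histogram, so the dimension is unchanged, in contrast to multi-scale concatenation.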
While the implementation of DSP is straightforward, its justification is less so. We summarize it in Sect. 5 and report the detailed derivation in Appendix B, which provides a theoretical justification and conditions under which the resulting descriptors are valid. In Sect. 4 we compare DSP-SIFT to alternate approaches. Motivated by the experiments of [31, 32] comparing local descriptors, we choose SIFT as a paragon and compare it to DSP-SIFT on the standard benchmark. Motivated by work comparing SIFT to both supervised and unsupervised CNNs, trained on Imagenet and Flickr respectively, on the same benchmark, we submit DSP-SIFT to the same protocol. We also run the test on a newly introduced synthetic dataset, which yields the same qualitative assessment.
Clearly, domain-size pooling of under-sampled semi-orbits cannot outperform fine sampling, so if we were to retain all the scale samples instead of aggregating them, performance would further improve. However, computing and matching a large collection of SIFT descriptors across different scales would incur significantly increased computational and storage costs. To contain the latter, Scale-less SIFT (SLS) assumes that descriptors at different scales populate a linear subspace and fits a high-dimensional hyperplane. The resulting SLS outperforms ordinary SIFT, as shown in Fig. 7. However, the linear-subspace assumption breaks down when considering large scale changes, so SLS is outperformed by DSP-SIFT despite the considerable difference in (memory and time) complexity.
Following other evaluation protocols, we use Maximally Stable Extremal Regions (MSER) to detect candidate regions, affine-normalize, re-scale and align them to the dominant orientation. For a detected scale, DSP-SIFT samples a number of scales within a neighborhood around it. For each scale-sampled patch, a single-scale un-normalized SIFT descriptor (1) is computed on the SIFT scale-space octave corresponding to the sampled scale (an updated version of the protocol, as discussed in detail in Appendix D). By choosing the density in (2) to be uniform, these raw histograms of gradient orientations at different scales are accumulated and normalized; following the practice of SIFT, we normalize, clamp and re-normalize the histograms, with the clamping threshold set to 0.067 empirically. Fig. 2 shows the mean average precision (defined in Sect. 4.2) for different domain-size pooling ranges. Improvements are observed as soon as more than one scale is used, with diminishing returns: performance decreases once the domain-size pooling radius grows too large. Fig. 2 also shows the effect of the number of size samples used to construct DSP-SIFT. Although the more samples the merrier, three size samples are sufficient to outperform ordinary SIFT, and the improvement from additional samples quickly becomes minimal: they do not further increase the mean average precision, but incur more computational cost. The pooling range and number of samples used in the evaluation in Sect. 4 are selected empirically on the Oxford dataset [30, 31].
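The normalize-clamp-renormalize postprocessing just mentioned is standard SIFT practice; a minimal sketch, with the clamping threshold of 0.067 from the text:

```python
import numpy as np

def normalize_clamp_renormalize(h, clamp=0.067):
    """SIFT-style postprocessing of the accumulated histogram:
    unit-normalize, clamp entries at `clamp` (0.067 here, set empirically
    in the text), then renormalize to unit length.
    """
    h = np.asarray(h, dtype=float)
    h = h / np.linalg.norm(h)   # first normalization
    h = np.minimum(h, clamp)    # suppress dominant bins
    return h / np.linalg.norm(h)  # renormalize after clamping
```

Clamping limits the influence of a few dominant gradient orientations (e.g., a single strong edge) on the matching distance.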
As a baseline, the RAW-PATCH descriptor (named following prior work) is the unit-norm grayscale intensity of the affine-rectified patch, resized to a fixed size.
The standard SIFT, widely accepted as a paragon [30, 32], is computed using the VLFeat library. Both SIFT and DSP-SIFT are computed on the SIFT scale-space corresponding to the detected scales. Instead of mapping all patches to an arbitrary user-defined size, we use the area of each selected and rectified MSER region to determine the octave level in the scale-space where SIFT (as well as DSP-SIFT) is to be computed.
Scale-less SIFT (SLS) is computed using the source code provided by the authors: for each selected and rectified patch, standard SIFT descriptors are computed over a range of scales, and the standard PCA subspace dimension is used, yielding a final descriptor of much higher dimension after a subspace-to-vector mapping.
To compare DSP-SIFT to a convolutional neural network, we use the top performer in prior evaluations: an unsupervised model pre-trained on natural image patches, each undergoing a large number of transformations. The responses at the intermediate layers (CNN-L3) and (CNN-L4) are used for comparison, following the same protocol. Since the network requires input patches of fixed size, we tested and report results on two input sizes, denoted PS69 and PS91.
Although no direct comparison with Multiscale Template Descriptors (MTD) is performed, SLS can be considered to dominate it, since SLS uses all scales without collapsing them into a single histogram. The derivation in Sect. 5 suggests, and empirical evidence in Fig. 2 confirms, that aggregating the histogram across all scales significantly reduces discriminative power. Sect. 4.4 compares DSP-SIFT to a BoW which pools SIFT descriptors computed at different sizes at the same location.
The Oxford dataset [30, 31] comprises pairs of images of mostly planar scenes seen under different pose, distance, blurring, compression and lighting. They are organized into categories undergoing transformations of increasing magnitude. While routinely used to evaluate descriptors, this dataset has limitations in terms of size and its restriction to mostly planar scenes, modest scale changes, and no occlusions. Fischer et al. recently introduced a dataset of image pairs with more extreme transformations, including zooming, blurring, lighting change, rotation, perspective and nonlinear transformations.
Following prior practice, we use precision-recall (PR) curves to evaluate descriptors. A match between two descriptors is called if their Euclidean distance is less than a threshold τ. It is then labeled a true positive if the intersection-over-union (IoU) of their corresponding MSER-detected regions is larger than a fixed overlap threshold. Both datasets provide ground-truth mappings between images, so the overlap is computed by warping the first MSER region into the second image and then computing its overlap with the second MSER region. Recall is the fraction of true positives over the total number of correspondences; precision is the fraction of true matches among all matches called. By varying the distance threshold τ, a PR curve can be generated and the average precision (AP, a.k.a. area under the curve, AUC) estimated. The mean of the APs provides the mean average precision (mAP) scores used for comparison.
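The protocol above can be sketched as follows: given descriptor distances and IoU-derived true/false labels, sweep the distance threshold and integrate precision over recall. This is a minimal sketch of the standard step-wise AP computation, not the authors' evaluation code.

```python
import numpy as np

def average_precision(distances, is_true_match):
    """Precision-recall from descriptor distances: sweeping the distance
    threshold tau, matches with distance below tau are called, and AP is the
    area under the resulting PR curve. Labels come from region-overlap (IoU)
    ground truth, as in the text. Minimal sketch.
    """
    order = np.argsort(distances)  # increasing distance = decreasing confidence
    labels = np.asarray(is_true_match, dtype=float)[order]
    tp = np.cumsum(labels)
    precision = tp / np.arange(1, len(labels) + 1)
    # Step-wise AP: sum precision at each new true positive, divide by #positives
    return float((precision * labels).sum() / labels.sum())
```

Averaging this AP over all image pairs gives the mAP scores reported in the comparisons.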
Fig. 3 shows the behavior of each descriptor for varying severity of each transformation. DSP-SIFT consistently outperforms other methods when there are large scale changes (zoom). It is also more robust to other transformations, such as blur, lighting and compression in the Oxford dataset, and nonlinear, perspective, lighting, blur and rotation in Fischer's dataset. DSP-SIFT is not at the top of the list of all compared descriptors in viewpoint-change cases, although "viewpoint" is a misnomer here: MSER-based rectification accounts for most of the viewpoint variability, and the residual variability is mostly due to interpolation and rectification artifacts. The fact that DSP-SIFT outperforms the CNN in nearly all cases on Fischer's dataset is surprising, considering that the neural network is trained by augmenting the dataset with similar types of transformations.
Fig. 4 shows head-to-head comparisons between these methods, in the same format as prior evaluations. DSP-SIFT outperforms SIFT by a wide margin on both Oxford and Fischer. Only on two pairs of images in the Fischer dataset does domain-size pooling negatively affect the performance of SIFT, and the decrease is small. DSP-SIFT improves on SIFT for every pair of images in the Oxford dataset, and the improvement comes without any increase in dimension. In comparison, CNN-L4 achieves smaller improvements over SIFT while increasing the dimension many-fold. On both datasets, DSP-SIFT also consistently outperforms CNN-L4 and SLS despite its lower dimension.
To compare DSP-SIFT to BoW, we computed SIFT at multiple scales on concentric regions, with a range of dictionary sizes, trained on a large collection of SIFT descriptors computed on samples from ILSVRC-2013. To make the comparison fair, the same scales are used to compute DSP-SIFT. By doing so, the only difference between the two methods is how to pool across scales, rather than what or where to pool. In SIFT-BOW, pooling is performed by encoding SIFTs from nearby scales using the quantized visual dictionary, while DSP-SIFT combines the histograms of gradient orientations across scales directly. To compute similarity between SIFT-BOWs, we tested both the histogram intersection kernel and a norm-based distance, achieving the best performance with the latter on both Oxford and Fischer. Fig. 5 shows the direct comparison between DSP-SIFT and SIFT-BOW, with the former a clear winner.
Fig. 7 shows the complexity (descriptor dimension) versus performance (mAP) tradeoff, and Table 1 summarizes the results. In Fig. 7, an "ideal" descriptor would achieve maximal mAP using the smallest possible number of bits, landing at the top-left corner of the graph. DSP-SIFT shares the lowest complexity with SIFT and achieves the best mAP among all the descriptors. Reading the graph horizontally, DSP-SIFT outperforms all the other methods at a fraction of their complexity. SLS achieves the second-best performance, but at the cost of a many-fold increase in dimension. In general, the performance of CNN descriptors is worse than DSP-SIFT but, interestingly, their mAPs do not change significantly if the network responses are computed on a smaller resampled patch to obtain lower-dimensional descriptors.
Descriptors computed on larger domain sizes are usually more discriminative, up to the point where the domain straddles occluding boundaries (Fig. 10). When using a detector, the size of the domain is usually chosen to be a factor of the detected scale, which affects performance in a way that depends on the dataset and the incidence of occlusions. In our experiments, this parameter (dilation factor) is set at 3, and we note that DSP-SIFT is less sensitive than ordinary SIFT to this parameter. Since DSP-SIFT aggregates domains of various sizes (smaller and larger) around the nominal size, it is important to ascertain whether the improvement comes from size pooling, or simply from including larger domains. To this end, we compare DSP-SIFT, pooling a range of domain sizes around the scale determined by the detector, to a single-size descriptor computed at the largest size in that range (SIFT-L). This establishes that the increase in performance of DSP-SIFT over ordinary SIFT comes from pooling across domain sizes, not just from picking larger domain sizes. In the example in Fig. 6, the largest domain size yields even worse performance than the detection scale. In a more complex scene where the test images exhibit occlusion, this will be even more pronounced, as there is a tradeoff between discriminative power (calling for a larger size) and the probability of straddling an occlusion (calling for a smaller size).
1. The likelihood function of the scene given images is a minimal sufficient statistic of the latter for the purpose of answering questions about the former. Invariance to nuisance transformations induced by (semi-)group actions on the data can be achieved by representing orbits, which are maximal invariants. The planar translation-scale group can be used as a crude first-order approximation of the action of the translation group in space (viewpoint changes), including scale change-inducing translations along the optical axis. This draconian assumption is implicit in most single-view descriptors.
2. Comparing (semi-)orbits entails a continuous search (non-convex optimization) that has to be discretized for implementation purposes. The orbits can be sampled adaptively, through the use of a co-variant detector and the associated invariant descriptor, or regularly – as customary in classical sampling theory.
3. In adaptive sampling, the detector should exhibit high sensitivity to nuisance transformations (e.g., small changes in scale should cause a large change in the response to the detector, thus providing accurate scale localization) and the descriptor should exhibit small sensitivity (so small errors in scale localization cause a small change in the descriptor). Unfortunately, for the case of SIFT (DoG detector and gradient orientation histogram descriptor), the converse is true.
4. Because correspondence entails search over samples of each orbit, time complexity increases with the number of samples. Undersampling introduces structural artifacts, or “aliases,” corresponding to topological changes in the response of the detector. These can be reduced by “anti-aliasing,” an averaging operation. For the case of (approximations of) the likelihood function, such as SIFT and its variants, anti-aliasing corresponds to pooling. While spatial pooling is common practice, and reduces sensitivity to translation parallel to the image plane, scale pooling – which would provide insensitivity to translation orthogonal to the image plane – and domain-size pooling – which would provide insensitivity to small changes of visibility, are not. This motivates the introduction of DSP-SIFT, and the rich theory on sampling and anti-aliasing could provide guidelines on what and how to pool, as well as bounds on the loss of discriminative power coming from undersampling and anti-aliasing operations.
Image matching under changes of viewpoint, illumination and partial occlusions is framed as a hypothesis testing problem, which results in a non-convex optimization over continuous nuisance parameters. The need for efficient test-time performance has spawned an industry of engineered descriptors, which are computed locally so the effects of occlusions can be reduced to a binary classification (co-visible, or not). The best known is SIFT, which has been shown to work well in a number of independent empirical assessments [30, 32], which, however, come with little analysis of why it works or indications of how to improve it. We have made a step in that direction by showing that SIFT can be derived from sampling considerations, where spatial binning and pooling are the result of anti-aliasing operations. However, SIFT and its variants only perform such operations for planar translations, whereas our interpretation calls for anti-aliasing domain size as well. Doing so can be accomplished in a few lines of code and yields significant performance improvements. Such improvements even place the resulting DSP-SIFT descriptor above a convolutional neural network (CNN) that had recently been reported as a top performer in the Oxford image matching benchmark. Of course, we are not advocating replacing large neural networks with local descriptors. Indeed, there are interesting relations between DSP-SIFT and convolutional architectures, explored in [40, 41].
Domain-size pooling, and the regular sampling of scale "unhinged" from the spatial frequencies of the signal, are divorced from scale selection principles rooted in scale-space theory, wavelets and harmonic analysis. There, the goal is to reconstruct a signal, with the focus on photometric nuisances (additive noise). In our case, the size of the domain where images correspond depends on the three-dimensional shape of the underlying scene and on visibility (occlusion) relations, and has little to do with the spatial frequencies or "appearance" of the scene. Thus, we do away with the linking of domain size and spatial frequency (the "uncertainty principle", Fig. 9).
DSP can be easily extended to other descriptors, such as HOG, SURF, CHOG, including those supported on structured domains such as DPMs , and to network architectures such as convolutional neural networks and scattering networks , opening the door to multiple extensions of the present work. In addition, a number of interesting open theoretical questions can now be addressed using the tools of classical sampling theory, given the novel interpretation of SIFT and its variants introduced in this paper.
We are thankful to Nikolaos Karianakis for conducting the comparison with various forms of CNNs, and to Philipp Fischer, Alexey Dosovitskiy and Thomas Brox for sharing their dataset, evaluation protocol and comments. Research sponsored in part by NGA HM02101310004, leveraging on theoretical work conducted under the aegis of ONR N000141110863, NSF RI-1422669, ARO W911NF-11-1-0391, and FA8650-11-1-7156.
This first section summarizes the background needed for the derivation, reported in the next section.
In this section we refer to a general scalar signal, for instance the projection of the albedo of the scene onto a scanline. We define a detector to be a mechanism that selects samples, and a descriptor to be a statistic computed from the signal of interest and associated with a sample. In the simplest case, the signal is regularly sampled, so the detector does not depend on the signal, and the descriptor is simply the value of the function at the sample. Other examples include:
The detector is trivial: the samples form a lattice, independent of the signal. The descriptor is a weighted average of the signal in a neighborhood of fixed size (possibly unbounded) around each sample. Neither the detector nor the descriptor function depends on the signal (although the value of the latter, of course, does).
If the signal were band-limited, Shannon's sampling theory would offer guarantees on its exact reconstruction from the sampled representation. Unfortunately, the signals of interest are not band-limited (images are discontinuous), and therefore the reconstruction can only be approximate. Typically, the approximation includes "alien structures," i.e., spurious extrema and discontinuities that do not exist in the original signal. This phenomenon is known as aliasing. To reduce its effects, one can replace the original data with a surrogate that is (closer to) band-limited and yet close to the original, so that the samples can encode it free of aliasing artifacts. The conflicting requirements of faithful approximation and restricted bandwidth trade off discriminative power (reconstruction error) with complexity, which is one of the goals of communications engineering. This tradeoff can be optimized by the choice of anti-aliasing operator, that is, the function that produces the surrogate from the data, usually via convolution with a low-pass filter. In our context, we seek a tradeoff between discriminative power and sensitivity to nuisance factors. This comes naturally when anti-aliasing is performed with respect to the action of nuisance transformations.
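The aliasing phenomenon and its remedy can be demonstrated numerically: decimating a sinusoid whose period equals the sampling step produces a spurious near-constant "alias," while applying a simple box low-pass filter (our stand-in for a generic anti-aliasing operator) before decimation suppresses it. A minimal sketch:

```python
import numpy as np

def subsample(x, step, antialias=False):
    """Decimate a 1-D signal by `step`, optionally anti-aliasing first with
    a moving-average (box) low-pass filter of width `step`. Illustrates the
    tradeoff in the text: smoothing destroys detail but suppresses aliases.
    """
    x = np.asarray(x, dtype=float)
    if antialias:
        kernel = np.ones(step) / step
        x = np.convolve(x, kernel, mode="same")
    return x[::step]
```

For a sinusoid of period 4 sampled every 4 samples, direct decimation yields a constant nonzero alias, while the box filter averages each full period to (nearly) zero.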
The detector could be "adapted" to the signal by designing a functional that selects the samples. Typically, the spatial frequencies of the signal modulate the length of the sampling interval. A special case of adaptive sampling that does not require stationarity assumptions is described next. The descriptor may also depend on the signal, e.g., by making the statistic depend on a neighborhood of variable size.
For signals that are neither stationary nor band-limited, we can leverage the violations of these assumptions to design a detector. For instance, if the signal contains discontinuities, the detector can place samples at the discontinuities ("corners"). For band-limited signals, the detector can place samples at critical points (maxima, or "blobs", minima, saddles). A (location-scale) co-variant detector is a functional whose zero-level sets define isolated (but typically multiple) samples of scales and locations, locally as a function of the signal via the implicit function theorem, in such a way that if the signal is transformed, for instance via a linear operator depending on location and scale parameters, then so are the samples.
The associated descriptor can then be any function of the signal in the reference frame defined by the samples, the most trivial being the restriction of the original function to the corresponding neighborhood. This, however, does not reduce the dimensionality of the representation. Other descriptors can compute statistics of the signal in the neighborhood, or on the entire line. Note that descriptors could have different dimensions for each sample.
In classical sampling theory, anti-aliasing refers to low-pass filtering or smoothing that typically does not cause genetic phenomena (spurious extrema, or aliases, appearing in the reconstruction of the smoothed signal); this central tenet of scale-space theory only holds for scalar signals, although such effects have been shown to be rare in two-dimensional Gaussian scale-space. Of course, anti-aliasing typically has destructive effects, in the sense of eliminating extrema that are present in the original signal.
A side-effect of anti-aliasing, which has implications when the goal is not to reconstruct, but to detect or localize a signal, is to reduce the sensitivity of the relevant variable (descriptor) to variations of the samples (detector). If we sample translations and just store the sampled values, an arbitrarily small translation of the sample can cause an arbitrarily large variation in the representation when the sample falls on a discontinuity. An anti-aliasing operator should reduce this sensitivity to translation. Of course, sensitivity could be trivially reduced to zero by a constant representation; the goal is to trade off sensitivity with discriminative power. For the case of translation, this tradeoff has been described in prior work. However, similar considerations hold for scale and domain-size sampling.
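To make the sensitivity argument concrete, the following minimal sketch (the step signal, sample positions, and Gaussian width are all made up for illustration) compares the variation of a raw sample with that of a Gaussian-smoothed one under a one-sample shift:

```python
import numpy as np

def sample_with_smoothing(f, x, sigma):
    """Sample 1-D signal f near position x, optionally after Gaussian smoothing.

    sigma=0 is plain (aliased) point sampling; sigma>0 averages f under
    Gaussian weights centered at x, a simple anti-aliasing operator.
    """
    if sigma == 0:
        return float(f[int(round(x))])
    t = np.arange(len(f))
    w = np.exp(-0.5 * ((t - x) / sigma) ** 2)
    return float(np.sum(w * f) / np.sum(w))

# A step discontinuity: a one-sample shift changes the raw sample by the
# full step height, while the smoothed sample barely moves.
f = np.concatenate([np.zeros(50), np.ones(50)])
raw = abs(sample_with_smoothing(f, 50, 0) - sample_with_smoothing(f, 49, 0))
smooth = abs(sample_with_smoothing(f, 50, 5) - sample_with_smoothing(f, 49, 5))
print(raw, smooth)  # smoothing reduces sensitivity: smooth << raw
```

The constant-representation extreme corresponds to sigma going to infinity, which destroys all discriminative power; the tradeoff is in the choice of sigma.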
The derivation of DSP-SIFT and its extensions follows a series of steps summarized as follows:
We start from the correspondence, or matching, task: classify a given datum (test image, or target) as coming from one of a number of model classes, each represented by a training image (template).
Both training and testing data are affected by nuisance variability due to changes of (i) illumination, (ii) vantage point, and (iii) partial occlusion. The former is approximated by local contrast transformations (monotonic continuous changes of intensity values), a maximal invariant to which is the gradient orientation. Vantage-point changes are decomposed into a translation parallel to the image plane, approximated by a planar translation of the image, and a translation orthogonal to it, approximated by a scaling of the image. Partial occlusions determine the shape of corresponding regions in training and test images, which are approximated by a given shape (say, a circle or square) of unknown size (scale). These are very crude approximations, but they are nevertheless implicit in most local descriptors. In particular, camera rotations are not addressed in this work, although others have done so.
Solving the (local) correspondence problem amounts to a multiple-hypothesis testing problem, including the background class. Nuisance (i) is eliminated at the outset by considering gradient orientation instead of image intensity. Dealing with nuisances (ii)–(iii) requires searching across all (continuous) translations, scales, and domain sizes.
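The claim that gradient orientation discounts nuisance (i) can be checked numerically. The sketch below (the synthetic test image and the contrast map are arbitrary choices) verifies that a monotonic, continuous intensity change rescales the gradient by a positive factor and thus leaves its orientation essentially unchanged:

```python
import numpy as np

# Gradient orientation as a maximal invariant to contrast changes:
# for a monotonic, continuous map g, grad(g(I)) = g'(I) * grad(I),
# a positive rescaling that leaves the orientation unchanged.
x = np.linspace(0.0, np.pi, 128)
X, Y = np.meshgrid(x, x)
img = np.sin(X) + 0.5 * np.cos(2 * Y)    # smooth synthetic test image
contrast = np.tanh(img)                  # an arbitrary monotonic intensity map

def orientations(im):
    gy, gx = np.gradient(im, x, x)       # derivatives w.r.t. physical coordinates
    return np.arctan2(gy, gx), np.hypot(gx, gy)

theta, mag = orientations(img)
theta_c, _ = orientations(contrast)
strong = mag > 0.5                       # skip near-flat regions (orientation undefined)
err = np.abs(np.angle(np.exp(1j * (theta - theta_c))))[strong]
print(err.max())                         # small: orientation survives the contrast change
```

The residual error is due only to the finite-difference approximation of the gradient; in the continuum the invariance is exact wherever the gradient is nonzero.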
The resulting matching function must be discretized for implementation purposes. Since the matching cost is quadratic in the number of samples, sampling should be reduced to a minimum, which in general introduces artifacts (“aliasing”).
Anti-aliasing operators can be used to reduce the effects of aliasing artifacts. For the case of (approximations of) the likelihood function, such as SIFT, anti-aliasing corresponds to marginalizing residual nuisance transformations, which in turn corresponds to pooling gradient orientations across different locations, scales and domain sizes.
The samples can be thought of as a special case of “deformation hypercolumns” (samples with respect to the orientation group), with the addition of the size-space semi-group (Fig. 8). Most importantly, the samples along the group are anti-aliased to reduce the effects of structural perturbations.
For simplicity, we formalize the matching problem for a scalar image (a scanline), and neglect contrast changes for now, focusing on the location-scale group and domain size instead.
Consider a finite set of possible models (templates, or ideal training images). Each sample of the data (test image) is obtained from one of the templates via translation, scaling, and sampling with a given interval, provided it falls in the visible domain. Otherwise, the scene is occluded there, and the datum has nothing to do with the template.
The forward model that, given a template and all nuisance factors, generates the data is indicated as follows: for samples in the visible domain,
where the additive term is a sample of a white, zero-mean Gaussian random variable. Otherwise, the datum is a realization of a process independent of the template (the “background”). The sampling operator is linear (it can be written as an integral on the real line using the characteristic function of the pixel region, or a more general sampling kernel, for instance a zero-mean Gaussian). Then we have
where the integration region corresponds to a pixel. Matching then amounts to a hypothesis testing problem on whether a given measurement is generated by one of the templates, under a suitable choice of nuisance parameters, or is otherwise labeled as background:
and the alternative hypothesis is simply the complement. If the background density is unknown, the likelihood ratio test reduces to comparing the product on the right-hand side to a threshold, typically tuned to the ratio with the second-best match (although some recent work using extreme-value theory improves on this). In any case, the log-likelihood for points in the interval can be written as
which has to be minimized over all pixels and templates, of which there is a finite number. However, it also has to be minimized over the continuous nuisance variables. Since the cost is in general neither convex nor smooth as a function of these parameters, analytical solutions are not possible. Discretizing these variables is necessary (coarse-to-fine, homotopy-based methods or jump-diffusion processes can alleviate, but not remove, this burden), and since the minimization amounts to a search over several dimensions, we seek methods to reduce the number of samples as much as possible.
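A minimal sketch of this discretized search, for a 1-D signal with a squared-error cost (the grid of shifts and scales is an arbitrary illustration, not the sampling scheme analyzed below):

```python
import numpy as np

def match_cost(template, data, shift, scale):
    """Sum of squared differences between data and a shifted, scaled template.

    The template is resampled by linear interpolation at (x - shift) / scale,
    a crude discrete stand-in for the continuous nuisance transformations.
    """
    x = np.arange(len(data))
    warped = np.interp((x - shift) / scale, np.arange(len(template)), template)
    return float(np.sum((data - warped) ** 2))

def brute_force_match(template, data, shifts, scales):
    """Exhaustive minimization over a discretized translation-scale grid."""
    return min((match_cost(template, data, u, s), u, s)
               for u in shifts for s in scales)

rng = np.random.default_rng(0)
template = rng.standard_normal(64)
x = np.arange(64)
data = np.interp((x - 3) / 1.0, np.arange(64), template)  # template shifted by 3
cost, u, s = brute_force_match(template, data,
                               shifts=range(-5, 6), scales=[0.5, 1.0, 2.0])
print(u, s)  # recovers shift 3, scale 1.0
```

The cost of the grid search is the product of the number of samples per variable, which is why the number of samples, and the aliasing incurred by keeping it small, matters.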
There are many ways to sample, some described in Sect. A.1, so several questions are in order: (a) How should each variable be sampled, regularly or adaptively? (b) If sampled regularly, when do aliasing phenomena occur? Can anti-aliasing be performed to reduce their effects? (c) The search is joint over translation-scale and domain size; given one, it is easy to optimize over the other. Can the two be “separated”? (d) Is it possible to quantify and optimize the tradeoff between the number of samples and classification performance? Or, for a given number of samples, to develop the “best” anti-aliasing (“descriptor”)? (e) For a histogram descriptor, how is “anti-aliasing” accomplished?
When time is not a factor, it is common to forgo the detector and compute descriptors “densely” (a misnomer) by regularly subsampling the image lattice, or possibly undersampling by a fixed “stride.” Sometimes, scale is also regularly sampled, typically at far coarser granularity than the scale-space used for scale selection, for obvious computational reasons. In general, regular sampling requires assumptions on band limits, and the image is not band-limited as a function of translation. Therefore, tailored sampling (detector/descriptor) is best suited for the translation group. (The purported superiority of “dense SIFT,” regularly sampled at thousands of locations, over ordinary SIFT, at tens or hundreds of detected locations, as reported in a few empirical studies, is misleading, as the comparison has to be performed for a comparable number of samples.) We will therefore assume that translation has been tailor-sampled (detected, or canonized), but only up to a localization error. Without loss of generality, we assume the sample is centered at zero, and the residual translation lies in a neighborhood of the origin. In Fig. 11 we show that the sensitivity to scale of a common detector (DoG), which should be high, is instead lower than the sensitivity of the resulting descriptor, which should be low. Therefore, small changes in scale cause large changes in scale-sample localization, which in turn cause large changes in the value of the descriptor. We thus forgo scale selection, and instead finely sample scale. This causes complexity issues, which prompt the need to sub-sample, and correspondingly to anti-alias or aggregate across scale samples. Alternatively, as done in Sect. 4, we can have a coarse adaptive or tailored sampling of scales, and then perform fine-scale sampling and anti-aliasing around the (multiple) selected scales.
Concerning (b), aliasing phenomena appear as soon as Nyquist’s conditions are violated, which is almost always the case for scale and domain-size (Fig. 12). While most practitioners are reluctant to down-sample spatially, leaving millions of locations to test, it is rare for anyone to employ more than a few tens of scales, corresponding to a wild down-sampling of scale-space. This is true a fortiori for domain size, which is often fixed to a single patch size. And yet, spatial anti-aliasing is routinely performed in most descriptors, whereas none, to the best of our knowledge, perform scale or domain-size anti-aliasing. Anti-aliasing should ideally decrease the sensitivity of the descriptor without excessive loss of discriminative power. This is illustrated in Fig. 12.
For (c), we choose to fix the domain size in the target (test) image, and to regularly sample scale and domain size, re-mapping each to the domain size of the target (Fig. 1). For comparison with prior work, we fix the patch size accordingly. While fixing one of the two domains entails a loss, it can be justified as follows: clearly, the hypothesis cannot be tested independently on each datum. However, testing on any subset of the “true inlier set” reduces the power, but not the validity, of the test. Vice versa, using a “superset” that includes outliers invalidates the test. However, a small percentage of outliers can be managed by considering a robust (Huber) norm instead of the quadratic norm. Therefore, one could consider a sequential hypothesis testing problem, starting from each sample as a hypothesis, then “growing” the region by one sample, and repeating the test. Note that the optimization has to be solved at each step. (In this interpretation, the test can be thought of as a setpoint change-detection problem.) Another interpretation is that of (binary) region-based segmentation, where one wishes to classify the range of a function into two classes, with values coming from either a template or the background, but the threshold is placed on the domain of the function. Of course, the statistics used for the classification depend on the region, so this has to be solved as an alternating minimization, but it is a convex one. As a first-order approximation, one can fix the interval and accept a less powerful test (if it is a subset of the actual domain) or a test corrupted by outliers (if it is a superset). This is, in fact, done in most local feature-based registration or correspondence methods, and even in region-based segmentation of textures, where statistics must be pooled in a region.
While (d) is largely an open question, (e) follows directly from classical sampling considerations, as described in Sect. A.1.
In the case of matching images under nuisance variability, it has been shown that the ideal descriptor computed at a location is not a vector, but a function that approximates the likelihood, with the nuisances marginalized. In practice, the descriptor is approximated with a regularized histogram, similar to SIFT (1). In this case, anti-aliasing corresponds to a weighted average across different locations, scales and domain sizes. The averaging is accomplished simply by pooling the histogram across different locations and domain sizes, as in (2). The weight function can be designed to optimize the tradeoff between sensitivity and discrimination, although in Sect. 4 we use a simple uniform weight.
To see how pooling can be interpreted as a form of generalized anti-aliasing, consider a function sampled on a discretized domain, with a neighborhood of each sample (for instance, the sampling interval). The pooled histogram is
whereas the anti-aliased signal (for instance with respect to the pillbox kernel) is
The latter can be obtained as the mean of the former
although the former can be used for purposes other than computing the mean (the best estimate under Gaussian, i.e., L2, uncertainty), for instance to compute the median (the best estimate under uncertainty measured by the L1 norm), or the mode:
The approximation is accurate only to the extent to which the underlying distribution is stationary and ergodic (so that the spatially pooled histogram approaches the density); otherwise, it is still a generalization of the weighted average or mean.
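The relationship between the pooled histogram and the anti-aliased (box-filtered) signal can be illustrated with a small numerical sketch (the signal, neighborhood size, and bin grid are made up):

```python
import numpy as np

def pooled_histogram(f, i, half_width, bin_edges):
    """Histogram of the values of f in a neighborhood of sample i (uniform weights)."""
    lo, hi = max(0, i - half_width), min(len(f), i + half_width + 1)
    counts, _ = np.histogram(f[lo:hi], bins=bin_edges)
    return counts / counts.sum()

rng = np.random.default_rng(0)
f = rng.random(101)
i, w = 50, 5
edges = np.linspace(0.0, 1.0, 21)
centers = 0.5 * (edges[:-1] + edges[1:])
p = pooled_histogram(f, i, w, edges)

mean_from_hist = float(np.sum(p * centers))        # recovers the box-filtered value
box_filtered = float(f[i - w:i + w + 1].mean())    # direct moving average (anti-aliasing)
median_from_hist = float(centers[np.searchsorted(np.cumsum(p), 0.5)])
mode_from_hist = float(centers[np.argmax(p)])      # statistics beyond the mean
print(mean_from_hist, box_filtered)                # agree up to bin quantization
```

The mean of the pooled histogram matches the box-filtered signal up to bin quantization, while the median and mode are estimates the anti-aliased signal alone cannot provide.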
This derivation also points the way to how a descriptor can be used to synthesize images: simply by sampling the descriptor, thought of as a density for a given class [12, 47]. It also suggests how descriptors can be compared: rather than computing descriptors in both training and test images, a test datum can be fed directly to the descriptor of a given model class to yield its likelihood, without computing the descriptor in the test image.
A detector is a function of the data that returns an element of a chosen group of transformations, the most common being translation (e.g., FAST), translation-scale (e.g., SIFT), similarity (e.g., SIFT combined with the direction of maximum gradient), and affine (e.g., Harris-affine). Once transformed by the inverse of the detected transformation, the data is, by construction, invariant to the chosen group. If that were the only nuisance affecting the data, there would be no need for a descriptor, in the sense that the data itself, in the reference frame determined by any co-variant detector, is a maximal invariant to the nuisance group.
However, the chosen group often captures only a small subset of the transformations undergone by the data. For instance, all the groups above are only coarse approximations of the deformations undergone by the domain of an image under a change of viewpoint. Furthermore, there are transformations affecting the range of the data (image intensity) that are not captured by (most) co-variant detectors. The purpose of the descriptor is to reduce variability due to transformations that are not captured by the detector, while retaining as much as possible of the discriminative power of the data.
In theory, so long as descriptors are compared using the same detector, the particular choice of detector should not affect the comparison. In practice, there are many second-order effects, where quantization and unmodeled phenomena affect different descriptors in different ways. The important role of the detector, however, is to determine a co-variant reference frame in which the descriptor is computed.
In standard SIFT, image gradient orientations are aggregated in selected regions of scale-space. Each region is defined in the octave corresponding to the selected scale, centered at the selected pixel location, where the selection is determined by the SIFT detector. Although the size of the original image region subtended varies with the selected scale (from a few to a few hundred pixels), the histogram is aggregated in regions that have constant size across octaves (the sizes are slightly different within each octave so as to subtend a constant region of the image). These are design parameters; in VLFeat they are assigned default values. In a different scale-space implementation, one could have a single design parameter, which we call the “base size” for simplicity.
In comparing with a convolutional neural network, Fischer et al. chose fixed-size patches in images of bounded dimension. This choice is made for convenience, to enable the use of pre-trained networks. They use MSER to detect candidate regions for testing, rather than SIFT’s detector. However, rather than using the size of the original MSER region to determine the octave where SIFT should be computed, they pre-process all patches to a common size. As a result, all SIFT descriptors are computed at the same octave, rather than at the scale determined by the detector. This short-changes SIFT, as some descriptors are computed in regions that are too small relative to their scale, and others too large.
One way to correct this bias would be to use the patch size as the base size. However, this would yield an even worse (dataset-induced) bias: a large base size in images of bounded dimension means that any feature detected at higher octaves encompasses the entire image. While discriminative power increases with size, so does the probability of straddling an occlusion: the power of a local descriptor increases with size only up to a point, beyond which occlusion phenomena become dominant (Fig. 10). This phenomenon is evident even in the Oxford and Fischer datasets, despite their being free of occlusion phenomena. Note that while Fischer et al. allow regions smaller than the patch size to be detected (and scale them up to that size), in effect anything smaller is considered at the native resolution, whereas using the SIFT detector would send anything larger to a higher octave.
A sounder way to correct the bias is to use the detector in its proper role, that is, to determine a reference frame with respect to which the descriptor is computed. For the case of MSER, this consists of affine transformations. Therefore, the region where the descriptor is computed is centered, oriented, skewed and scaled depending on the area of the region detected. Rather than arbitrarily fixing the scale by choosing a size to which to re-scale all patches, regions of different size are selected, and each is then assigned a scale equal to its area divided by the base size. That determines the octave where SIFT is computed.
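One plausible reading of this area-to-octave assignment can be sketched as follows (the base size value and the rounding convention are illustrative assumptions, not values from the text):

```python
import math

def octave_from_area(area, base_size=31 * 31):
    """Assign an octave from a detected region's area.

    scale (area ratio) = area / base_size, as described above;
    the octave is log2 of the corresponding linear scale factor.
    Both base_size and the rounding are illustrative assumptions.
    """
    linear_scale = math.sqrt(area / base_size)
    return max(0, round(math.log2(linear_scale)))

# A region 4x the linear size of the base region lands two octaves up;
# a region of exactly the base size stays at octave 0.
print(octave_from_area((4 * 31) ** 2), octave_from_area(31 * 31))
```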
In any case, regardless of what detector is used, DSP-SIFT is all about where to compute the descriptor: instead of being computed just at the selected size, however chosen, it should be computed for multiple domain sizes, each mapped to the corresponding location in scale-space. There, SIFT aggregates gradient orientations at a single scale, whereas DSP-SIFT aggregates them at multiple scales.
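As an illustration of domain-size pooling in its simplest form, the toy sketch below (no spatial cells, no Gaussian weighting, uniform weights across sizes, and made-up parameters throughout; not the full DSP-SIFT implementation) averages gradient-orientation histograms computed over several patch sizes around the same location:

```python
import numpy as np

def orientation_histogram(img, cx, cy, radius, n_bins=8):
    """Magnitude-weighted gradient-orientation histogram over a square patch."""
    patch = img[cy - radius:cy + radius + 1, cx - radius:cx + radius + 1]
    gy, gx = np.gradient(patch.astype(float))
    hist, _ = np.histogram(np.arctan2(gy, gx), bins=n_bins,
                           range=(-np.pi, np.pi), weights=np.hypot(gx, gy))
    return hist / (hist.sum() + 1e-12)

def dsp_histogram(img, cx, cy, radii):
    """Domain-size pooling: average the histograms over several patch sizes,
    with uniform weights across sizes."""
    return np.mean([orientation_histogram(img, cx, cy, r) for r in radii], axis=0)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
h_single = orientation_histogram(img, 32, 32, 8)      # one domain size, as in SIFT
h_dsp = dsp_histogram(img, 32, 32, radii=[6, 8, 10])  # pooled across domain sizes
print(np.round(h_dsp, 3))
```

The pooled histogram has the same dimension as the single-size one, which is why DSP-SIFT retains the dimensionality of SIFT.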