1 Introduction
The SIFT descriptor, introduced by Lowe in [21], is a widely used descriptor in image processing and machine learning. The descriptor and its variants have been thoroughly studied and have been shown to systematically outperform other descriptors [24]. Many extensions have been proposed, including sampling on a log-polar grid [24, 28], reducing the dimension with PCA [20], and scale pooling by averaging descriptors sampled from a neighborhood around the detected scale [11]. However, little work has been done to study and enhance the descriptor thresholding that is presented as part of the method. This thresholding [27], also called clamping, was introduced by Lowe as an ad hoc way to achieve robustness to nonlinear illumination effects, such as sensor saturation. This would lead us to believe that the clamping process improves matching performance on image pairs that exhibit significant illumination changes, but has little effect on images with similar lighting conditions. However, Lowe's clamping method can greatly increase matching performance (a 14.4% improvement on the Oxford dataset) on general image pairs even when no significant illumination changes exist. This work proposes a novel method, which we call meaningful clamping (MC), to automatically threshold SIFT descriptors; it improves on the idea of clamping by providing a rigorous process to compute the clamping threshold. This leads to significantly increased performance over the existing clamping method on a wide variety of image matching problems. The method is based on an a contrario methodology for computing detection thresholds [10], and is introduced in Sec. 3. Matching results of experiments performed on the Oxford dataset [25] are shown in Sec. 5, and confirm state-of-the-art results.
2 The SIFT Descriptor
The image matching problem can be separated into two parts: feature detection and feature description. The goal of a feature detector is to produce a set of stable feature frames that can be detected reliably across corresponding image pairs. Examples of methods that detect similarity feature frames include SIFT, SURF [2], SFOP [16], Harris-Laplace [23], and Hessian-Laplace [23]. Other methods have been developed to detect affine feature frames, such as MSER [22], LLD [4], Harris-Affine [23], and Hessian-Affine [23]. For any given detected feature, its frame determines how to sample a normalized patch, for which we build a descriptor. The goal of the descriptor is to distinctly represent the image content of the normalized patch in a compact way.
We propose to create an extension of the SIFT descriptor, since it has been shown to systematically outperform other descriptors [24]. The SIFT descriptor is a smoothed and weighted 3D histogram of gradient orientations. For any patch $u$, we form a gradient vector field $\nabla u$. We define the grid $\mathcal{G}$, which determines the bin centers $(x_j, \theta_i)$ of the histogram and has size $N_x \times N_x \times N_\theta$. In typical implementations, $\mathcal{G}$ is chosen to have $4 \times 4$ spatial bins and $8$ angular bins. With $N_x = 4$ and $N_\theta = 8$, a single, pre-normalized spatial bin of the SIFT descriptor can be written as the integral expression

$$f_{i,j}(u) = \int w\big(x - x_j\big)\, w_{\mathrm{ang}}\big(\angle\nabla u(x) - \theta_i\big)\, G_\sigma(x)\, \|\nabla u(x)\|\, dx \qquad (1)$$
where $\angle\nabla u$ denotes the gradient orientation [11, 31]. The weight function $w$ is a bilinear interpolation, and $w_{\mathrm{ang}}$ is an angular interpolation [27]. The parameter $r$ is the radius of the grid $\mathcal{G}$, so that the patch has dimensions $2r \times 2r$. The histogram samples are also weighted by a Gaussian density function $G_\sigma$, the purpose of which is to discount the contribution of samples at the edge of the patch and thereby reduce boundary effects. The building of SIFT descriptors using Eq. 1 for all experiments was performed with the VLFeat open-source vision library [31] (note that the VLFeat library estimates Eq. 1 by sampling on a discrete grid). For further details on how the descriptor is constructed, the reader is encouraged to review [27, 31].

2.1 Clamping
In an effort to design a descriptor robust to nonlinear contrast changes, Lowe proposed to threshold the bin magnitudes of the descriptor. For a unit-normalized descriptor $f$, Lowe defines this threshold as

$$\hat f_i = \min(f_i, \lambda) \qquad (2)$$

with the parameter $\lambda = 0.2$ set experimentally; this is the default setting in [31] (the 0.2 clamping threshold is 'hard coded' into [31]). This is followed by an additional normalization to ensure that the descriptor retains unit length after thresholding. It is important to note that the thresholding in Eq. 2 maintains invariance to affine contrast changes. The thresholding process, or clamping, is thought to reduce the effects of camera saturation or other illumination effects. However, we will show empirically in Sec. 5.3 that clamping also increases the general matching performance of the descriptor, observed to be 14.4% compared to the performance without clamping on the Oxford dataset. This occurs even when lighting conditions are consistent between image pairs. The threshold parameter $\lambda = 0.2$ is set rather arbitrarily and is fixed for every descriptor. By applying an automatic threshold that is allowed to vary for every descriptor, we can significantly improve the performance of the SIFT descriptor for image matching problems.
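The normalize-clamp-renormalize procedure described above can be sketched in a few lines (a minimal NumPy sketch; the function name `lowe_clamp` is ours):

```python
import numpy as np

def lowe_clamp(f, lam=0.2):
    """Clamp a SIFT descriptor following Lowe's scheme:
    L2-normalize, threshold every bin at lam, then renormalize."""
    f = np.asarray(f, dtype=float)
    f = f / np.linalg.norm(f)      # unit length
    f = np.minimum(f, lam)         # clamp bin magnitudes at lam = 0.2
    return f / np.linalg.norm(f)   # restore unit length
```

Because the normalization happens before and after the clamp, the result remains invariant to affine contrast changes, as noted above.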
3 Meaningful Clamping
The bins of the SIFT descriptor represent the underlying content of a local image patch. We wish to detect when geometric structure is present in the patch, and this is indicated by the observation of large descriptor bin values. This amounts to detecting significant bins by computing a perception threshold for each descriptor and using that as the clamping limit. The idea is that once bins reach the perception threshold, little information is gained by exceeding this value. An a contrario methodology is proposed to compute descriptor perception thresholds, based on a mathematical formulation of the Helmholtz principle, which states that "we immediately perceive whatever could not happen by chance" [10]. It has been shown to be highly successful for many problems in image processing, such as detecting line segments [18], change detection [14], contrasted boundaries [8], vanishing points [1], and modes of histograms [6, 15].
Instead of trying to define a priori the structure of the underlying image content, an impossible task for general natural images, we instead define what it means to have a lack of structure. Following the Helmholtz principle, lack of structure is simply modeled as uniform randomness, which we call the uniform background model, or the null hypothesis $\mathcal{H}_0$. We assume the descriptor has been generated from $\mathcal{H}_0$, and claim a detection, i.e. that significant geometric content is present, when there is a large deviation from $\mathcal{H}_0$. If the observed event is extremely unlikely to have been generated from this background model, we claim the event as meaningful because it could not have occurred by random chance.

Let $\mathcal{G}$ be the histogram grid associated with the descriptor $f$, which represents a set of connected bins such that every bin $b_i$ contains a number of sample counts $k_i$ and a neighborhood of bins $\mathcal{N}_i$ for which $\{b_i\} \cup \mathcal{N}_i$ is connected. Introducing a neighborhood set for each bin allows us to have circular connected angular histograms, while the spatial dimensions are rectangular. We also let $M$ be the total number of samples of the descriptor and $p_i$ be the probability that a random sample is drawn in bin $b_i$, which leads to the definition of the null hypothesis for the descriptor $f$.

Definition 3.1.
Let $f$ be a SIFT descriptor built on the grid $\mathcal{G}$. The descriptor is said to be drawn from the null hypothesis, $\mathcal{H}_0$, if every sample is independently, identically, and uniformly distributed with $p_i = 1/|\mathcal{G}|$ for every bin $b_i$.

It follows that the probability that at least $k$ samples fall in bin $b_i$ under the null hypothesis, with $p = 1/|\mathcal{G}|$, is given by the binomial tail
$$\mathcal{B}(M, k, p) = \sum_{j=k}^{M} \binom{M}{j} p^j (1-p)^{M-j}. \qquad (3)$$
When this probability becomes small, the observed count is unlikely to have occurred under the uniform background model; we then reject the null hypothesis and conclude the bin must be meaningful. This results in detecting meaningful bins by thresholding the probability in Eq. 3. Given the assumption that the data was drawn from the uniform background model, we can compute for any bin $b_i$ the expected number of false detections, denoted NFA for the number of false alarms, as
$$\mathrm{NFA}(k_i) = N_T \,\mathcal{B}(M, k_i, p), \qquad (4)$$
where $N_T$ is the number of tests, typically defined as the number of all possible connected subsets of the histogram. $N_T$ can be seen as a Bonferroni correction [17, 19] for the expected value in Eq. 4. This leads to the following definition of a meaningful bin.
Definition 3.2.
A bin $b_i$ of the SIFT descriptor is an $\varepsilon$-meaningful bin if $\mathrm{NFA}(k_i) \le \varepsilon$.
This leads to the question of what to use for $\varepsilon$. We can follow the work of Desolneux et al. [7] and always set $\varepsilon = 1$, since including the number of tests, $N_T$, allows the threshold to scale automatically with histogram size. The setting of $\varepsilon = 1$ can be interpreted as setting the threshold so as to limit the expected number of false detections under a uniform background model to less than one. This has two important consequences. First, for some applications it is important for the algorithm to correctly give zero detections when no object exists. Second, this strategy gives detection thresholds that are similar to those of human perception [9]; and the dependence on $\varepsilon$ is logarithmic and hence very weak [18]. We will hereafter refer to a $1$-meaningful bin as just a meaningful bin.
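Under these definitions, testing whether a single bin is meaningful only requires the binomial tail of Eq. 3 and the number of tests. A small, self-contained sketch in pure Python (the function names and the sample values in the usage note are illustrative, not taken from the paper):

```python
from math import comb

def binom_tail(M, k, p):
    """Eq. 3: probability that at least k of M i.i.d. uniform samples
    fall in a bin of probability p."""
    return sum(comb(M, j) * p**j * (1 - p)**(M - j) for j in range(k, M + 1))

def is_meaningful(k, M, p, n_tests, eps=1.0):
    """Definition 3.2 with the conventional eps = 1: a bin holding k samples
    is meaningful when NFA = n_tests * binom_tail(M, k, p) <= eps."""
    return n_tests * binom_tail(M, k, p) <= eps
```

For example, with M = 1000 samples over 128 bins (p = 1/128, about 7.8 expected samples per bin), a bin holding 8 samples is unremarkable under any reasonable number of tests, while one holding 30 deviates strongly from the background model and is flagged as meaningful.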
We can now select a clamping threshold for $f$ as the minimum bin count needed for a bin to be detected as meaningful. For a given descriptor $f$, with total sample count $M$ and bin probability $p$, we define this threshold as
$$t(f) = \min\{\, k \in \mathbb{N} : N_T\, \mathcal{B}(M, k, p) \le 1 \,\}. \qquad (5)$$
We then proceed to create the new clamped descriptor as
$$\tilde f_i = \min\big(f_i, t(f)\big) \qquad (6)$$
for every bin $b_i$.
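Putting Eqs. 5 and 6 together, the exact meaningful-clamping threshold can be found by scanning bin counts until the NFA drops below $\varepsilon = 1$. The following is an illustrative sketch (in practice the bins carry weighted masses rather than integer counts, which would be rescaled first; the function names are ours):

```python
from math import comb

def binom_tail(M, k, p):
    # Eq. 3: probability of at least k successes out of M trials.
    return sum(comb(M, j) * p**j * (1 - p)**(M - j) for j in range(k, M + 1))

def mc_threshold(M, p, n_tests, eps=1.0):
    """Eq. 5: smallest count k whose NFA = n_tests * B(M, k, p) is <= eps."""
    for k in range(M + 1):
        if n_tests * binom_tail(M, k, p) <= eps:
            return k
    return M

def meaningful_clamp(bins, n_tests, eps=1.0):
    """Eq. 6: clamp every bin of an integer-count histogram at the threshold."""
    M = sum(bins)
    p = 1.0 / len(bins)
    t = mc_threshold(M, p, n_tests, eps)
    return [min(b, t) for b in bins]
```

Note that the threshold adapts to each descriptor through $M$ and $p$, unlike the fixed 0.2 of Lowe's scheme.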
4 Implementation Details
The a contrario threshold in Eq. 5 depends on $N_T$, which is defined as the number of all possible connected subsets of $\mathcal{G}$. However, for any histogram of dimension greater than one, we cannot explicitly compute this, and instead use the number of aligned rectangular regions
$$N_r = \left(\frac{N_x(N_x+1)}{2}\right)^{2} \big(N_\theta(N_\theta - 1) + 1\big), \qquad (7)$$
which represents a (loose) lower bound of $N_T$. There could also be concern with respect to computing the inverse binomial tail in Eq. 5. While efficient computational libraries exist to directly calculate the detection threshold (for example, the quantile function in the binomial library of Boost), this still requires an iterative method, since no closed-form solution exists. This may be undesirable for certain real-time applications. We can instead create an approximation to Eq. 5 by applying the bound

$$\mathcal{B}(M, k, p) \le 1 - \Phi\!\left(\frac{k - Mp}{\sqrt{Mp(1-p)}}\right) + \frac{C}{\sqrt{Mp(1-p)}}, \qquad (8)$$

where $\Phi$ is the standard normal cumulative distribution function and $C$ is a constant [10]. The bound in Eq. 8 is valid under mild conditions on $M$, $k$, and $p$ [30]. As $Mp(1-p)$ grows large, the correction term becomes small (for any typical implementation of SIFT, the correction term is negligible and the bound is valid) and Eq. 8 converges to the central limit approximation. Using this, we can solve for the detection threshold as
$$t = Mp + \Phi^{-1}\!\left(1 - \frac{1}{N_r}\right)\sqrt{Mp(1-p)}, \qquad (9)$$
where $\Phi^{-1}$ is the standard normal quantile function. From this we can compute a new clamped descriptor, $\tilde f$, with Eq. 6 using the bin threshold of Eq. 9. Using the approximation still maintains the clamping property of Eq. 6.
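The closed-form threshold of Eq. 9 can be sketched with the standard normal quantile from the Python standard library. Treat this as a sketch of the central-limit approximation described above, not the paper's exact expression; the function name is ours:

```python
from statistics import NormalDist

def mc_threshold_approx(M, p, n_tests, eps=1.0):
    """Central-limit approximation of the a contrario threshold:
    t = M*p + z * sqrt(M*p*(1-p)), where z is the (1 - eps/n_tests)
    standard normal quantile. Avoids inverting the binomial tail."""
    mu = M * p
    sigma = (M * p * (1 - p)) ** 0.5
    z = NormalDist().inv_cdf(1 - eps / n_tests)
    return mu + z * sigma
```

For M = 235 samples spread over 28 bins and 1000 tests, this lands within a bin or two of the exact threshold obtained by iterating on the binomial tail, at a fraction of the cost.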
Proposition 4.1.
Under the validity conditions of the bound in Eq. 8, the threshold of Eq. 9 is greater than or equal to the exact a contrario threshold of Eq. 5, so every bin that reaches it is a meaningful bin.
The significance of Proposition 4.1 is that we can safely use Eq. 9 and ensure the descriptor is appropriately clamped without having to determine the true number of tests, $N_T$, or iterate to find the inverse of the binomial tail. The validity conditions of the bound in Eq. 8 are very weak, since any practical implementation of the SIFT descriptor meets them.
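As a concrete illustration of how the number of tests in Eq. 7 scales, here is one plausible enumeration of aligned regions on the SIFT grid, counting axis-aligned spatial rectangles crossed with circular angular intervals (this exact combination is our assumption about which regions are counted, not a formula taken from the paper):

```python
def n_aligned_regions(nx, ntheta):
    """Count aligned sub-regions of an nx-by-nx spatial grid crossed with a
    circular axis of ntheta angular bins. An n-cell axis has n*(n+1)//2
    intervals; a circular axis of length q has q*(q-1) proper arcs plus
    the full circle."""
    spatial = (nx * (nx + 1) // 2) ** 2    # axis-aligned rectangles
    angular = ntheta * (ntheta - 1) + 1    # circular angular intervals
    return spatial * angular

# Standard SIFT grid: 4x4 spatial bins, 8 orientation bins.
print(n_aligned_regions(4, 8))  # 100 * 57 = 5700
```

Whatever the precise count, its influence on the threshold is only logarithmic, which is why a loose lower bound suffices.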
5 Results
We present image matching results applying the newly developed meaningful clamping method, and compare it to the clamping procedure proposed by Lowe. For reference, we also compare both clamping methods to descriptors for which no clamping was performed.
5.1 Dataset
To evaluate matching performance, we use the Oxford dataset [25], which is comprised of 40 image pairs of various scene types undergoing different camera poses and transformations. These include viewpoint angle, zoom, rotation, blurring, compression, and illumination. The set contains eight categories, each of which consists of image pairs undergoing increasing magnitudes of transformations. Included with each image pair is a homography matrix, which represents the ground truth mapping of points between the images. The transformations applied to the images are real and not synthesized as in [13]. The viewpoint and zoom+rotation categories are generated by focal length adjustments and physical movement of the camera. Blur is generated by varying the focus of the camera; and illumination by varying the aperture. The compression set was created by applying JPEG compression and adjusting the image quality parameter.
5.2 Metrics
To evaluate the performance of local descriptors with respect to image matching, we follow the methods of [24]. Given a pair of images, we extract SIFT features from both images. A match between two descriptors is declared when their Euclidean distance is less than some threshold $t_d$. Any descriptor match is considered a correct match if the two detected features correspond as defined in [25]. Using the ground-truth homography mapping supplied with the dataset, features are considered to correspond when the area of intersection over union of their regions is greater than 50%, to be consistent with [24]. For some value of $t_d$ we can compute recall as

$$\text{recall} = \frac{\#\text{correct matches}}{\#\text{correspondences}},$$

as well as 1-precision,

$$1 - \text{precision} = \frac{\#\text{false matches}}{\#\text{correct matches} + \#\text{false matches}}.$$
The pair (1-precision, recall) represents a point in the plane, and by varying $t_d$ we can create curves that demonstrate the matching performance of the descriptor. This is called the precision-recall curve, and we follow the method of [12] to compute the area under the curve, producing a value called average precision (AP). (We use 100 points to sample the precision-recall curves as opposed to the 11 proposed in [12]; this gives a higher-resolution estimate of the AP.) Larger AP indicates superior matching performance. The average of APs, across individual categories or the entire dataset, provides the mean average precision (mAP) used to compare clamping methods.
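The protocol above can be sketched as follows: sweep the distance threshold, record (recall, 1-precision) points, and average precision over the sampled curve. This is a simplified sketch (the real protocol labels correctness with the homography overlap test of [25]; the function names are ours):

```python
def pr_curve(distances, correct, n_thresholds=100):
    """Sweep the descriptor-distance threshold and return a list of
    (recall, 1-precision) points. distances[i] is the distance of candidate
    match i; correct[i] says whether it passes the ground-truth overlap test."""
    total_correct = sum(correct)
    if total_correct == 0:
        return []  # no correspondences: AP is defined as zero in this case
    lo, hi = min(distances), max(distances)
    points = []
    for s in range(1, n_thresholds + 1):
        t = lo + (hi - lo) * s / n_thresholds
        matched = [c for d, c in zip(distances, correct) if d <= t]
        if not matched:
            continue
        tp = sum(matched)
        points.append((tp / total_correct, 1 - tp / len(matched)))
    return points

def average_precision(points):
    """Average the precision values over the sampled curve (a simplified
    stand-in for the area-under-curve estimate of [12])."""
    if not points:
        return 0.0
    return sum(1 - omp for _, omp in points) / len(points)
```

When every candidate match is correct, the precision is 1 at every threshold and the AP is exactly 1; mixing in false matches pulls the AP below 1.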
5.3 Evaluation
Table 1: mAP for each category of the Oxford dataset (FirstOctave = 0).
Category  No Clamping  Lowe Clamping  MC
Graffiti  0.123  0.161  0.205 
Wall  0.327  0.371  0.405 
Boats  0.301  0.341  0.375 
Bark  0.111  0.119  0.120 
Trees  0.207  0.288  0.366 
Bikes  0.414  0.371  0.496 
Leuven  0.387  0.538  0.635 
UBC  0.558  0.588  0.615 
All images  0.303  0.347  0.402 
We compute the AP for every image pair in the Oxford dataset, each for two different parameter settings of the SIFT detector. This parameter is called FirstOctave, and we test for both 0 and -1. Setting FirstOctave to -1 upsamples the image before creating the scale space, generating many more features than with 0, resulting in more total matches but lower overall AP. It is important to test this setting because it allows for greater scale variations between images, and it is the default setting for SIFT in the Covariant Features toolbox of the popular open-source VLFeat library [31]. It also shows how clamping impacts performance on large sets of SIFT points, and indicates how well the method scales with large amounts of data. For certain image pairs, the distortion between images is great enough that few or no feature correspondences exist. Under these circumstances no matches are found, and we cannot compute the precision-recall curves; we define the AP to be zero in this case.
Table 1 compares the mAP for each category in the Oxford dataset when the SIFT FirstOctave is set to 0. MC systematically outperforms Lowe clamping for every image transform type. It also shows that clamping can improve matching performance in general image pairs, not just in cases of significant illumination differences. The leuven (lighting) category shows an impressive 18.2% improvement, but does not exhibit the greatest gain, which occurred in bikes (blur) at 33.6%. The method shows remarkable performance on blurred images, with trees improving 27.0%. The bark (zoom+rotation) category had the least improvement, at 1.4%; however, this could be an artifact of the SIFT detector, which extracted few correct correspondences for this category. Boats, which also varies zoom+rotation, had a 9.9% increase. The mean AP over all image pairs of the Oxford dataset improved by 15.9% compared to Lowe clamping. Fig. 1 shows a direct comparison between clamping methods, with each point representing the AP of an image pair.
Table 2: mAP for each category of the Oxford dataset (FirstOctave = -1).
Category  No Clamping  Lowe Clamping  MC
Graffiti  0.016  0.035  0.110 
Wall  0.230  0.270  0.320 
Boats  0.054  0.118  0.244 
Bark  0.049  0.063  0.068 
Trees  0.043  0.096  0.173 
Bikes  0.141  0.112  0.185 
Leuven  0.115  0.210  0.365 
UBC  0.215  0.305  0.411 
All images  0.108  0.152  0.234 
For large-scale experiments with the FirstOctave parameter set to -1, the performance gap widens dramatically, showing that the improvement in matching increases as the number of points increases. The category exhibiting the most improvement was graffiti (viewpoint), with a remarkable 215.2% increase. Again, bark had the least improvement, at 7.9%. Even with FirstOctave set to -1, the SIFT detector performed poorly on the bark category and generated few correspondences, influencing the matching results as before. As a reference, boats increased by 106.9%. The mean AP increased by 54.0% over all image pairs in the dataset.
It is important to note that while SIFT is used as the detector for this experiment, other detectors may be used and obtain similar results. However, much like the SIFT detector, other detectors have fundamental parameters that greatly influence the total number of points generated. Our experiments point to the number of detected points as the single largest factor governing the amount of improvement over Lowe clamping: with a larger set of detected points to match, the percentage improvement in AP increases. Also of interest is that clamping can augment other recent advances in image descriptor construction, for example DSP-SIFT [11].
6 Conclusions and Future Work
A new method to threshold SIFT descriptors was presented. This method significantly improves mAP for image matching on the standard Oxford dataset. Future work will study the impact meaningful clamping has on other problems, such as large-scale image retrieval. Also of interest is the study of why meaningful clamping (and clamping in general) has such a large impact on image matching. The author conjectures that clamping affects the distribution of large point sets of descriptors. If the descriptor is not clamped, then a small number of descriptor bins dominate all other bins, constraining the points to lie mostly along the axes. Performing nearest-neighbor-type searches then becomes ambiguous, since many points exist at similar distances. By clamping, we threshold the bin magnitudes, which causes the points to 'spread out' and occupy the descriptor space more uniformly. This conjecture is supported by the observation in the presented experiments that the improvement drastically increased when matching the larger sets of points generated with the SIFT detector parameter FirstOctave set to -1.
Acknowledgments
The author would like to thank UCLA professor Stefano Soatto for many great discussions on the topic of clamping SIFT descriptors, and his student Jingming Dong for direction on descriptor evaluation. The author would also like to thank Lawrence Peterson, Shawnasie Kirchner, and the anonymous reviewers for spending valuable time to provide a careful and thorough review of this paper.
References
 [1] A. Almansa, A. Desolneux, and S. Vamech. Vanishing point detection without any a priori information. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(4):502–507, 2003.

 [2] H. Bay, T. Tuytelaars, and L. Van Gool. SURF: Speeded up robust features. In European Conference on Computer Vision (ECCV), pages 404–417. Springer, 2006.
 [3] C. Berger, T. Géraud, R. Levillain, N. Widynski, A. Baillard, and E. Bertin. Effective component tree computation with application to pattern recognition in astronomical imaging. In IEEE International Conference on Image Processing (ICIP), 2007.
 [4] F. Cao, J.-L. Lisani, J.-M. Morel, P. Musé, and F. Sur. A Theory of Shape Identification, volume 1948. Springer, 2008.
 [5] J. Delon, A. Desolneux, J.-L. Lisani, and A. B. Petro. Histogram analysis and its applications to fast camera stabilization. In International Workshop on Systems, Signals and Image Processing, pages 431–434, 2004.
 [6] J. Delon, A. Desolneux, J.-L. Lisani, and A. B. Petro. A nonparametric approach for histogram segmentation. IEEE Transactions on Image Processing, 16(1):253–261, 2007.
 [7] A. Desolneux, L. Moisan, and J.-M. Morel. Meaningful alignments. International Journal of Computer Vision, 40(1):7–23, 2000.
 [8] A. Desolneux, L. Moisan, and J.-M. Morel. Edge detection by Helmholtz principle. Journal of Mathematical Imaging and Vision, 14(3):271–284, 2001.
 [9] A. Desolneux, L. Moisan, and J.-M. Morel. Computational gestalts and perception thresholds. Journal of Physiology-Paris, 97(2):311–324, 2003.
 [10] A. Desolneux, L. Moisan, and J.-M. Morel. From Gestalt Theory to Image Analysis: A Probabilistic Approach, volume 34. Springer, 2007.
 [11] J. Dong and S. Soatto. Domain size pooling in local descriptors: DSP-SIFT. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.
 [12] M. Everingham, L. Van Gool, C. Williams, J. Winn, and A. Zisserman. The PASCAL visual object classes (VOC) challenge. International Journal of Computer Vision, 88(2):303–338, 2010.
 [13] P. Fischer, A. Dosovitskiy, and T. Brox. Descriptor matching with convolutional neural networks: A comparison to SIFT. arXiv preprint arXiv:1405.5769, 2014.

 [14] A. Flenner and G. Hewer. A Helmholtz principle approach to parameter free change detection and coherent motion using exchangeable random variables. SIAM Journal on Imaging Sciences, 4(1):243–276, 2011.
 [15] A. Flenner, G. Hewer, and C. Kenney. Two dimensional histogram analysis using the Helmholtz principle. Inverse Problems and Imaging, 2(4):485–525, 2008.
 [16] W. Förstner, T. Dickscheid, and F. Schindler. Detecting interpretable and accurate scale-invariant keypoints. In IEEE International Conference on Computer Vision (ICCV), pages 2256–2263. IEEE, 2009.
 [17] R. Grompone von Gioi and J. Jakubowicz. On computational Gestalt detection thresholds. Journal of Physiology-Paris, 103(1):4–17, 2009.
 [18] R. Grompone von Gioi, J. Jakubowicz, J.-M. Morel, and G. Randall. LSD: A fast line segment detector with a false detection control. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(4):722–732, 2010.
 [19] Y. Hochberg and A. C. Tamhane. Multiple Comparison Procedures. John Wiley & Sons, New York, 1987.
 [20] Y. Ke and R. Sukthankar. PCA-SIFT: A more distinctive representation for local image descriptors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, pages 506–513. IEEE, 2004.
 [21] D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91–110, 2004.
 [22] J. Matas, O. Chum, M. Urban, and T. Pajdla. Robust wide-baseline stereo from maximally stable extremal regions. Image and Vision Computing, 22(10):761–767, 2004.
 [23] K. Mikolajczyk and C. Schmid. Scale & affine invariant interest point detectors. International Journal of Computer Vision, 60(1):63–86, 2004.
 [24] K. Mikolajczyk and C. Schmid. A performance evaluation of local descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(10):1615–1630, 2005.
 [25] K. Mikolajczyk, T. Tuytelaars, C. Schmid, A. Zisserman, J. Matas, F. Schaffalitzky, T. Kadir, and L. Van Gool. A comparison of affine region detectors. International Journal of Computer Vision, 65(1–2):43–72, 2005.
 [26] L. Najman and M. Couprie. Building the component tree in quasi-linear time. IEEE Transactions on Image Processing, 15(11):3531–3539, 2006.
 [27] I. Rey-Otero and M. Delbracio. Anatomy of the SIFT method. Image Processing On Line, 4:370–396, 2014.
 [28] J. Rabin. Approches robustes pour la comparaison d’images et la reconnaissance d’objets. PhD thesis, Télécom ParisTech, 2009.
 [29] J. Rabin, J. Delon, and Y. Gousseau. A statistical approach to the matching of local features. SIAM Journal on Imaging Sciences, 2(3):931–958, 2009.
 [30] E. V. Slud. Distribution inequalities for the binomial law. The Annals of Probability, pages 404–412, 1977.
 [31] A. Vedaldi and B. Fulkerson. VLFeat: An open and portable library of computer vision algorithms. In Proceedings of the International Conference on Multimedia, pages 1469–1472. ACM, 2010.