1 Introduction
Image feature points have been widely used for decades as fundamental components of computer vision algorithms, and they continue to find use in such applications as medical image registration, estimation of homographies in stereo vision, and stitching of images for panoramas or satellite imagery. Features are generally defined as pixels (or sets of pixels) in an image which can be reliably identified and distinctly described across images. The goal of feature detection and description is to abstract images as point sets which can be matched to each other. These correspondences can then be used to compute (estimate) the transformation between the point sets and, thus, between the images.
Feature detectors are scalar functions which measure some response on an image, and features are typically identified as local extrema of these functions. For example, the popular Harris–Stephens corner detector identifies feature points as pixels which locally maximize the function
(1) $R = \det M - \kappa \, (\operatorname{tr} M)^2,$
where $\kappa$ is a parameter and
$M = \sum_{(x,y) \in W} \begin{pmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{pmatrix}$
is the second moment matrix of the image
har1988. This function responds strongly at pixels where the image varies greatly in two directions, and is thus a corner detector. The popular SIFT feature algorithm uses a multiscale image representation and computes a difference-of-Gaussians function (an approximation of the Laplacian-of-Gaussian) as a detector low2004. SIFT features are local maxima of this function over both space and scale. This multiscale representation allows identification of features at different spatial scales, and also serves to smooth noise out of the image. A major drawback of these detectors is that they are only Euclidean-invariant. In other words, these detectors will identify roughly the same image points as feature points in two images only if the two images are related by a Euclidean transformation (rotation, translation, reflection). The Euclidean group can be generalized to the affine group by including stretching and skewing transformations (see Figure 1). The equi-affine group, in particular, is useful in that it is a good approximation to projective transformations which are near the identity. Additionally, these transformations do not suffer from known difficulties associated with projective equivalence ast1995. Several affine-invariant image detectors have been proposed in the literature mik2005a; mik2005b. In contrast, projective invariance and moving frame-based signatures have been successfully applied in vision han2001; han2002.

In this work, we propose an invariant-theory based approach to affine-invariant feature detection. The equivariant method of moving frames olv2003; olv2015 was inspired by the classical moving frame theory, most closely associated with Élie Cartan (see Gug), but whose history can be traced back to the early nineteenth century, AR. The equivariant method can be readily employed to compute the (differential) invariants of a given transformation (Lie) group. These invariants are then combined into an affine-invariant feature detector function olv1999. Standard methods for feature description can then be applied to characterize each point. In contrast to the linear smoothing performed during the aforementioned multiscale feature algorithms, we apply a nonlinear, affine-invariant scale-space to smooth our image.
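As a concrete illustration of the detector functions discussed above, the Harris–Stephens response (1) can be sketched in a few lines of Python. This is a minimal illustration, not the implementation used in this paper; the window $W$ is taken to be Gaussian and $\kappa = 0.04$ is a common (but here assumed) parameter choice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(image, kappa=0.04, sigma=1.5):
    """Harris-Stephens corner response R = det(M) - kappa * tr(M)^2,
    where M is the Gaussian-windowed second moment matrix."""
    Iy, Ix = np.gradient(image.astype(float))
    # Windowed products of first derivatives form the entries of M.
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    det_M = Sxx * Syy - Sxy ** 2
    tr_M = Sxx + Syy
    return det_M - kappa * tr_M ** 2

# A bright square on a dark background: corners give the strongest response.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
R = harris_response(img)
```

The response is large and positive at the square's corners, negative along its edges, and zero in flat regions, which is exactly the behavior that makes local maxima of $R$ good corner candidates.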
The paper will have a tutorial flavor so as to remain fairly self-contained. In Section 2 we introduce the fundamental concepts of the method of moving frames and discuss how the method leads to the computation of differential invariants for a given Lie group acting on a manifold. In Section 3 the method is explicitly applied to the special affine group acting on coordinates of functions on $\mathbb{R}^2$ (i.e., images), and the fundamental second-order differential invariants of this action are computed. The differential invariants for the equi-affine group acting on functions on $\mathbb{R}^3$ are also computed. In Section 4 these invariants are used to compute a feature detector which is demonstrated to perform well at affine registration of 2D image pairs.
2 Method of Moving Frames
The method of moving frames is a powerful tool used to study the geometric properties of submanifolds under the action of transformation (Lie) groups olv2003; olv2015. In particular, the method allows direct computation of the differential invariants of a transformation group. The equivariant method of moving frames was introduced in fels1999 and permits a systematic application to general transformation groups acting on manifolds. The salient points of this theory are summarized here.
Given an $r$-dimensional Lie group $G$ acting on an $m$-dimensional manifold $M$, the method regards the construction of a moving frame, $\rho$. A moving frame is defined as a $G$-equivariant map $\rho: M \to G$. We will exclusively use right equivariant moving frames, which means that for $g \in G$ and a local coordinate $z$ on $M$, we have
$\rho(g \cdot z) = \rho(z) \, g^{-1}.$
As we will demonstrate, the computation of a moving frame for a Lie group action will allow us to compute the fundamental differential invariants of the group action on the manifold.
A major theorem regarding the existence of such moving frames is as follows olv2003 :
Theorem 1
A moving frame exists in a neighborhood of a point $z \in M$ if and only if $G$ acts freely and regularly near $z$.
The freeness condition is best expressed in terms of isotropy subgroups of $G$. The isotropy subgroup of $G$ with respect to $z \in M$ is the set
$G_z = \{ g \in G \mid g \cdot z = z \},$
that is, the set of all group elements which fix $z$. The group action is free if
(2) $G_z = \{e\} \quad \text{for all } z \in M,$
where $e$ is the group identity. In other words, the action is free if for any $z \in M$, the only group element fixing $z$ is the identity. This requires that the orbits of $G$ have the same dimension as $G$ itself, and hence a necessary (but not sufficient) condition for freeness is that $\dim G \le \dim M$. The group action is regular if the orbits form a regular foliation of $M$.
If the group action is both free and regular, Cartan's method of normalization is used to compute a moving frame. We begin by choosing a coordinate cross-section to the group orbits. A cross-section is a restriction of the form
(3) $K = \{ z_1 = c_1, \, \dots, \, z_r = c_r \};$
that is, we restrict $r$ of the coordinates to be fixed constants. This cross-section must be transverse to the group orbits. By freeness, for any $z \in M$, there is a group element mapping $z$ to this cross-section, i.e., to the unique point in the intersection of the cross-section with the group orbit through $z$. This unique group element is then the right moving frame for the action. This is formalized in the following theorem, which provides a method for practical construction of the right moving frame for a free and regular group action given a cross-section.
Theorem 2
Let $G$ act freely and regularly on $M$, and let $K$ be a cross-section. For $z \in M$, let $\rho(z)$ be the unique group element mapping $z$ to this cross-section:
(4) $\rho(z) \cdot z \in K.$
Then $\rho: M \to G$ is a right moving frame for the group action.
The goal is to obtain the group transformation parameters in equation (4) by applying the cross-section (3). Explicitly, if we write the transformed coordinates as
(5) $w = g \cdot z, \qquad w_i = h_i(g, z), \quad i = 1, \dots, m,$
then the cross-section choice gives us a system of equations
(6) $h_i(g, z) = c_i, \qquad i = 1, \dots, r.$
The system (6) is solved for the group parameters, and the right moving frame is ultimately given as
(7) $g = \rho(z).$
This group element clearly maps each $z$ to the cross-section, since the group parameters were chosen to do so. Note that $r$ of the coordinates of $w = \rho(z) \cdot z$ are specified by the cross-section, and so there remain $m - r$ "unnormalized" coordinates, namely $w_{r+1}, \dots, w_m$. If we substitute the previously computed moving frame parameters into these coordinates, we obtain the fundamental invariants, as stated in the following theorem.
Theorem 3
Let $\rho$ be the moving frame solution to the normalization equations (6). Then
$I_{r+1}(z) = w_{r+1}(\rho(z) \cdot z), \quad \dots, \quad I_m(z) = w_m(\rho(z) \cdot z)$
are a complete system of functionally independent invariants, called the fundamental invariants.
In this way, we have computed all of the coordinates of the map $z \mapsto \rho(z) \cdot z$ as
(8) $\rho(z) \cdot z = \left( c_1, \dots, c_r, I_{r+1}(z), \dots, I_m(z) \right).$
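A small worked example may help fix ideas. The following SymPy sketch (a toy example of our own, not one drawn from the paper) carries out the normalization procedure for the two-parameter group $\bar{x} = \lambda x + a$, $\bar{u} = u$ acting on functions $u(x)$, prolonged to second order.

```python
import sympy as sp

# Toy normalization example (not from the paper): the two-parameter group
#   xbar = lam*x + a,  ubar = u,
# acting on functions u(x), prolonged via the chain rule:
#   ubar_xbar = u_x / lam,   ubar_xbarxbar = u_xx / lam**2.
x, ux, uxx, lam, a = sp.symbols('x u_x u_xx lambda a')

xbar = lam * x + a
ubar_x = ux / lam
ubar_xx = uxx / lam**2

# Cross-section (3): xbar = 0, ubar_xbar = 1; solve the normalization
# equations (6) for the group parameters (Theorem 2).
frame = sp.solve([sp.Eq(xbar, 0), sp.Eq(ubar_x, 1)], [lam, a], dict=True)[0]

# Invariantize the remaining jet coordinate (Theorem 3): substituting the
# moving frame parameters yields a fundamental differential invariant.
invariant = sp.simplify(ubar_xx.subs(frame))   # u_xx / u_x**2
```

Solving the normalization equations yields the right moving frame $\lambda = u_x$, $a = -u_x x$, and invariantizing the remaining jet coordinate produces the fundamental invariant $u_{xx}/u_x^2$, exactly as Theorem 3 prescribes.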
As the name implies, the fundamental invariants are very useful, as evidenced by the following theorem.
Theorem 4
Any invariant $I$ can be uniquely expressed as a function
$I = F(I_{r+1}, \dots, I_m)$
of the fundamental invariants.
Further, given any (scalar) function on our manifold, we can “invariantize” the function by composing it with the moving frame map, as described in the following theorem.
Theorem 5
Given a function $F: M \to \mathbb{R}$, the invariantization of $F$ with respect to the right moving frame $\rho$ is the invariant function $\iota(F)$ defined as
(9) $\iota(F)(z) = F(\rho(z) \cdot z).$
In fact, the expression of invariants in terms of the fundamental invariants, as in Theorem 4, can be accomplished via the following Replacement Theorem.
Theorem 6
If $I(z_1, \dots, z_m)$ is any invariant, then
$I(z_1, \dots, z_m) = I\left( c_1, \dots, c_r, I_{r+1}, \dots, I_m \right).$
That is, any invariant is easily rewritten in terms of the fundamental invariants, which serves to prove Theorem 4.
A major difficulty arises when we attempt to apply this construction to some common group actions: many group actions are not free, often because the dimension of the manifold is less than the dimension of the group. In these cases, the normalization equations (6) cannot be fully solved for the group parameters. Examples of non-free group actions include the Euclidean and affine groups acting on $\mathbb{R}^2$. Theorem 1 does not directly apply for non-free group actions. However, we may increase the dimension of the manifold by prolonging the group action to the jet spaces; that is, we consider instead the prolonged group action on the derivatives
(10) $u, \quad u_x, \quad u_y, \quad u_{xx}, \quad u_{xy}, \quad u_{yy}, \quad \dots$
Generally, if the action is prolonged to a sufficiently high-order jet space, the action will become (locally) free olv1995, and we can proceed with our construction; we will have enough equations to solve the system (6). This prolongation can be accomplished by implicit differentiation, as demonstrated in the following section.
3 Equi-Affine Invariants for 2D Images
In this section we follow the second author's note olv2015a to apply the above procedure to construct the 2D equi-affine differential invariants in the context of 2D image transformations. In particular, we consider a grayscale image as a function $u: \mathbb{R}^2 \to \mathbb{R}$, and we seek the differential invariants of this function under equi-affine transformations of the domain coordinates, $(x, y)$. The expression of these invariants in terms of image derivatives is useful since the derivatives can be computed (approximated) directly from image data.
The equi-affine group $SA(2)$ acts on plane coordinates as
(11) $\bar{x} = \alpha x + \beta y + a, \qquad \bar{y} = \gamma x + \delta y + b,$
subject to the constraint
$\alpha \delta - \beta \gamma = 1.$
Each transformation in $SA(2)$ contains five independent parameters, thus $SA(2)$ is a five-dimensional Lie group. Note that these transformations are always invertible, with inverse
(12) $x = \delta (\bar{x} - a) - \beta (\bar{y} - b), \qquad y = -\gamma (\bar{x} - a) + \alpha (\bar{y} - b).$
We let $M = \mathbb{R}^3$ with coordinates $(x, y, u)$, where the image function $u$ depends on $(x, y)$ and is not affected by the equi-affine group transformations. Since $\dim SA(2) = 5 > 3 = \dim M$, this group action is not free. Hence, Theorem 1 does not apply, and we must prolong the group action to a higher-order jet space such that the action becomes free. For this example, we need only prolong to the second jet space, which has dimension $8$. (Although the first-order jet space has dimension $5$, the fact that $u$ is invariant means that the action is not free at order $1$.) We expect $8 - 5 = 3$ independent differential invariants, one of which is $u$ itself.
The prolonged action of the transformation indicates how the coordinate mapping (11) will affect the image function and its derivatives. Clearly, the function itself,
$\bar{u}(\bar{x}, \bar{y}) = u(x, y),$
is unchanged by a change of coordinates. Applying the chain rule of multivariate calculus, we can use the transformation (11) to construct the prolonged derivatives, that is, how the derivatives of the "transformed" function relate to the original derivatives and the transformation parameters. For the $\bar{x}$ coordinate,
(13) $\bar{u}_{\bar{x}} = \delta u_x - \gamma u_y.$
Similarly,
(14) $\bar{u}_{\bar{y}} = -\beta u_x + \alpha u_y.$
More generally, this differentiation can be represented by the implicit differentiation operators
$D_{\bar{x}} = \delta D_x - \gamma D_y, \qquad D_{\bar{y}} = -\beta D_x + \alpha D_y,$
where the $D$'s represent total differentiation operators with respect to their subscripts. Continuing in this fashion, we can construct the higher-order prolonged derivatives by repeated application of these operators. The second-order derivatives are
(15) $\bar{u}_{\bar{x}\bar{x}} = \delta^2 u_{xx} - 2 \gamma \delta u_{xy} + \gamma^2 u_{yy}, \qquad \bar{u}_{\bar{x}\bar{y}} = -\beta \delta u_{xx} + (\alpha \delta + \beta \gamma) u_{xy} - \alpha \gamma u_{yy}, \qquad \bar{u}_{\bar{y}\bar{y}} = \beta^2 u_{xx} - 2 \alpha \beta u_{xy} + \alpha^2 u_{yy}.$
The next step in the procedure is to normalize by choosing a cross-section to the group orbits. This amounts to equating several of the expressions (11), (13), (14), and (15) to constants. These constants must be chosen such that the resulting system is solvable for the group parameters. Since $SA(2)$ is five-dimensional, we need to choose five constants. The cross-section we will use is
(16) $\bar{x} = 0, \quad \bar{y} = 0, \quad \bar{u}_{\bar{x}} = 1, \quad \bar{u}_{\bar{y}} = 0, \quad \bar{u}_{\bar{x}\bar{y}} = 0,$
as in olv2015a. This (nonlinear) system is readily solved for the group parameters. Notice that since we normalized $\bar{u}_{\bar{x}} = 1$ and $\bar{u}_{\bar{y}} = 0$, we obtain
(17) $\delta u_x - \gamma u_y = 1, \qquad \beta u_x = \alpha u_y,$
respectively. Combining these with the unimodality constraint $\alpha \delta - \beta \gamma = 1$, and then using expression (17), we find
(18) $\alpha = u_x, \qquad \beta = u_y.$
Substituting these relationships into the second-order equation $\bar{u}_{\bar{x}\bar{y}} = 0$, we can solve for $\gamma$ as
(19) $\gamma = \dfrac{u_x u_{xy} - u_y u_{xx}}{u_y^2 u_{xx} - 2 u_x u_y u_{xy} + u_x^2 u_{yy}}.$
With this in hand, we compute $\delta$ as
(20) $\delta = \dfrac{1 + \gamma u_y}{u_x}.$
The translation parameters $a$ and $b$ can be computed easily from the choices $\bar{x} = \bar{y} = 0$ as
(21) $a = -(\alpha x + \beta y), \qquad b = -(\gamma x + \delta y),$
where the values found for the other parameters may be substituted in. Taken together, these group parameters define the right moving frame corresponding to our cross-section (16).
Having computed this, we are now in a position to compute invariant objects. In particular, given any function of $(x, y, u)$ and the derivatives of $u$, we can invariantize it by transforming it under (11) and then replacing all instances of the group parameters by their moving frame expressions. Some natural functions to transform are those used in the cross-section normalization. For example, let $F = x$ and notice
$F(g \cdot z) = \bar{x} = \alpha x + \beta y + a.$
Plugging in the above expressions,
(22) $\iota(x) = u_x x + u_y y - (u_x x + u_y y) = 0.$
This is obvious, of course, since the expressions for the group parameters were chosen according to the cross-section, in which we insisted $\bar{x} = 0$. The invariantized forms of the functions used in the cross-section normalization are constant, and are known as the phantom invariants.
More interestingly, Theorem 3 guarantees the existence of a set of fundamental second-order invariants. Consider the second derivatives $\bar{u}_{\bar{x}\bar{x}}$ and $\bar{u}_{\bar{y}\bar{y}}$, which were not used in the normalization. Invariantizing these, we obtain the well-known second-order equi-affine differential invariants:
(23) $J_1 = u_{xx} u_{yy} - u_{xy}^2$
and
(24) $J_2 = u_y^2 u_{xx} - 2 u_x u_y u_{xy} + u_x^2 u_{yy}.$
According to Theorem 4, $u$, $J_1$, and $J_2$ form a complete system of second-order differential invariants. It can be checked directly via the chain rule that these are truly equi-affine invariant.
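In practice, these invariants are approximated directly from pixel data by finite differences. A minimal sketch (assuming the standard forms $J_1 = u_{xx}u_{yy} - u_{xy}^2$ and $J_2 = u_y^2 u_{xx} - 2 u_x u_y u_{xy} + u_x^2 u_{yy}$; this is an illustration, not the paper's implementation):

```python
import numpy as np

def equiaffine_invariants(u):
    """Second-order equi-affine differential invariants of an image u,
    approximated with central finite differences:
      J1 = u_xx*u_yy - u_xy**2                         (Hessian determinant)
      J2 = u_y**2*u_xx - 2*u_x*u_y*u_xy + u_x**2*u_yy
    Axis 0 is treated as y, axis 1 as x."""
    uy, ux = np.gradient(u.astype(float))
    uyy, _ = np.gradient(uy)
    uxy, uxx = np.gradient(ux)
    J1 = uxx * uyy - uxy ** 2
    J2 = uy**2 * uxx - 2 * ux * uy * uxy + ux**2 * uyy
    return J1, J2
```

On the quadratic test image $u = x^2 + y^2$, central differences are exact in the interior, so the returned values there agree with the analytic $J_1 = 4$ and $J_2 = 8(x^2 + y^2)$.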
Figure 2 demonstrates $J_1$ and $J_2$ computed on the image pair in Figure 1. These functions are thresholded for clarity, but it can be observed that they are largely unchanged by the transformation. Differences can be attributed to the fact that our derivative computations via finite differences are performed on a Euclidean grid and are thus not affine-invariant themselves.
Remark: Moving frames can be used to design group-invariant numerical approximations to differential invariants and invariant differential equations, as in cal1996; wel2007. A future project is to develop equi-affine finite difference approximations to the differential invariants, which would better serve to localize invariant feature points.
3.1 The 3D Case
The extension to 3D image volumes has recently been investigated. Fully three-dimensional image data has become increasingly common, for example in the medical field. Here we describe the computation of differential invariants of the equi-affine group acting on three-dimensional images.
The equi-affine group $SA(3)$ acts on space coordinates as
(25) $\bar{\mathbf{x}} = A \mathbf{x} + \mathbf{b}, \qquad A \in \mathbb{R}^{3 \times 3}, \quad \mathbf{b} \in \mathbb{R}^3,$
subject to the constraint $\det A = 1$. Clearly, $SA(3)$ is an 11-dimensional Lie group, since there are 11 independent parameters. As in the two-dimensional case, this group action is certainly not free, and we must prolong the group action to a higher-order jet space such that the action becomes free. Unfortunately, when we prolong to the second jet space, the group action is still not free, even though the dimensions satisfy
$\dim \mathrm{J}^2 = 13 \ge 11 = \dim SA(3).$
Through our construction we will produce 3 functionally independent second-order differential invariants, and this proves that the (generic) orbits are 10-dimensional, which precludes freeness of the second-order prolonged action.
We thus need to prolong the group action to the third-order jet space in order to find a usable cross-section. The cross-section we will employ is
(26) 
Notice that two of the second-order derivative coordinates are not involved in the normalization, and so the invariantization of these quantities will give our desired fundamental second-order invariants.
The next step in the procedure would be to solve the system (26). In this case, this is a nonlinear system of 11 equations, and computing the closed-form solution is not as straightforward as it was in the 2D case. Instead, we will extend the 2D differential invariants $J_1$ and $J_2$ to the 3D case and then apply the Replacement Theorem 6. Recall that this theorem allows us to express any invariant in terms of the moving frame invariants. From the cross-section (26), we see
(27) 
and
(28) 
where the two invariantized second-order derivatives, not used in the normalization, are the fundamental invariants $I_1$ and $I_2$ we wish to determine.
Notice that the invariants $J_1$ and $J_2$ found in the 2D case can be written in more general, coordinate-free forms:
(29) $J_1 = \det(\nabla^2 u)$
and
(30) $J_2 = \nabla u^{T} \operatorname{adj}(\nabla^2 u) \, \nabla u,$
where $\nabla^2 u$ denotes the Hessian matrix of $u$ and $\operatorname{adj}$ denotes the adjugate matrix. If we consider now extending these to the 3D case, where $u = u(x, y, z)$, the invariantization results (27) and (28) give
(31) $\iota(J_1) = I_1, \qquad \iota(J_2) = I_2.$
That is, we can compute the fundamental invariants $I_1$ and $I_2$ for the 3D case directly as a generalization of the 2D fundamental invariants $J_1$ and $J_2$, respectively. Hence, the fundamental invariants are now expressed as
(32) $I_1 = \det(\nabla^2 u)$
and
(33) $I_2 = \nabla u^{T} \operatorname{adj}(\nabla^2 u) \, \nabla u,$
which can be computed directly from the image.
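A sketch of this computation, under our reading of the determinant/adjugate forms above. The rank-one identity $\det(H + g g^{T}) = \det H + g^{T} \operatorname{adj}(H) \, g$ avoids forming the adjugate explicitly; the code is an illustration, not the paper's implementation.

```python
import numpy as np

def equiaffine_invariants_3d(u):
    """Fundamental second-order equi-affine invariants of a 3D volume,
    assuming the coordinate-free forms
        I1 = det(H),   I2 = grad(u)^T adj(H) grad(u),
    with H the (finite-difference) Hessian of u. Uses the rank-one
    identity det(H + g g^T) = det(H) + g^T adj(H) g."""
    g = np.array(np.gradient(u.astype(float)))        # (3, *u.shape)
    H = np.array([np.gradient(gi) for gi in g])       # (3, 3, *u.shape)
    H = np.moveaxis(H, (0, 1), (-2, -1))              # (..., 3, 3)
    gv = np.moveaxis(g, 0, -1)[..., :, None]          # (..., 3, 1)
    I1 = np.linalg.det(H)
    I2 = np.linalg.det(H + gv @ np.swapaxes(gv, -1, -2)) - I1
    return I1, I2
```

On the quadratic volume $u = x^2 + y^2 + z^2$ the interior values are exact: the Hessian is $2I$, so $I_1 = 8$ and $I_2 = 4\|\nabla u\|^2$.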
In Figure 3, we show these invariants for a typical 3D image. Several slices of an image volume are shown, along with the differential invariants computed on these slices. They exhibit similar characteristics as in the 2D case, being clustered about edges and corners, and are equi-affine invariant.
Notice that we did not need to compute any third-order differential invariants. These can, however, be generated from the second-order differential invariants by invariant differentiation; they are omitted here since the computations prove somewhat lengthy.
4 Application: 2D Feature Detection with Scale-Space
In this section, the differential invariants $J_1$ and $J_2$ are combined into an affine-invariant function which will serve as a feature detector. To make detection more robust to noise, we construct a scale-space of our images. Unlike traditional methods such as SIFT low2004, our scale-space is nonlinear, affine-invariant, and based on a geometric curve evolution equation.
4.1 Affine-Invariant Gradient Magnitude
The affine differential invariants computed above were used in olv1999 to construct an affine-invariant analog of the traditional Euclidean gradient magnitude, namely
(34) 
This function is slightly modified to avoid zero divisors:
(35) 
An example of this function applied to the image pair in Figure 1 is shown in Figure 4. This function was previously applied as an edge detector for an affine-invariant active contour model. Indeed, the function can be seen to give a large response along object boundaries.
Some example feature points identified as local maxima of the function (35) are shown in Figures 6, 7, and 8. As expected, these points largely cluster around object boundaries. This poses a difficulty, since edge points are not distinct and tend to foil matching algorithms. A further modification may be made such that, for example, candidate feature points are only retained as true feature points if the image variation is significant in multiple directions.
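Candidate extraction itself is standard: local maxima of any detector response above a threshold are kept. A minimal sketch (the window size and relative threshold are hypothetical parameters, not those used in our experiments):

```python
import numpy as np
from scipy.ndimage import maximum_filter

def local_maxima(response, size=5, rel_thresh=0.1):
    """Feature candidates: pixels that equal the maximum of their local
    neighborhood and exceed a fraction of the global maximum response."""
    neighborhood_max = maximum_filter(response, size=size)
    mask = (response == neighborhood_max) & \
           (response > rel_thresh * response.max())
    return np.argwhere(mask)   # array of (row, col) candidate locations
```

The thresholding step also discards flat regions, where every pixel trivially equals its neighborhood maximum.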
4.2 Affine-Invariant Scale-Space
An important preprocessing step for feature detection is the construction of the Gaussian (or linear) scale-space of the image intensity signal by convolution of the image with Gaussian kernels of increasing standard deviation or, equivalently, evolution of the image via the linear diffusion equation
(36) $u_t = \Delta u,$
with initial condition $u(x, y, 0) = u_0(x, y)$, where $u_0$ is the original image, $\Delta$ is the usual Laplacian operator, and $t$ is an artificial time parameter. The solution to this equation, $u(x, y, t)$, for $(x, y) \in \mathbb{R}^2$ and $t > 0$, is the Gaussian scale-space of the image wit1984. This is a multiscale representation of the image: a one-parameter family of images with the characteristic that as $t$ is increased, noise and high-frequency features are removed and we obtain successively smoothed versions of the original image data rom2013. Feature points can then be extracted from this scale-space image representation by finding local detector maxima in both space and scale low2004. By searching across all of scale-space, feature points can be identified at a variety of scales. A notable drawback of this linear approach is that important image features such as edges and corners are blurred by the smoothing, and so feature localization is lost.
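The linear scale-space is straightforward to construct in practice, since solving (36) up to time $t$ is equivalent to convolving with a Gaussian of standard deviation $\sigma = \sqrt{2t}$. A minimal sketch (the sampled $\sigma$ values are an arbitrary illustrative choice):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_scale_space(image, sigmas=(1, 2, 4, 8)):
    """Linear scale-space of an image: convolution with Gaussians of
    increasing standard deviation, i.e. the heat equation u_t = Lap(u)
    sampled at times t = sigma**2 / 2."""
    return [gaussian_filter(image.astype(float), s) for s in sigmas]
```

A detector is then run on every member of the returned family, and candidates are kept where the response is extremal in both space and scale.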
Nonlinear scale-spaces have also been investigated for feature detection. The popular KAZE algorithm alc2012 relies on the Perona–Malik anisotropic diffusion framework, in which an image scale-space is constructed as the solution to the nonlinear diffusion equation
(37) $u_t = \operatorname{div} \left( c(x, y, t) \, \nabla u \right),$
where $c$ is a conductance function which serves to slow diffusion near edges in the image per1990. This image representation preserves edges better than its linear counterpart (36), which is recovered from (37) when $c \equiv 1$.
For a fully affine-invariant multiscale feature detection pipeline, we require both an affine-invariant scale-space image representation and an affine-invariant feature point detector. A fully affine-invariant scale-space for plane curves was developed in sap1993. In this work, a closed plane curve $\mathcal{C}(p, t)$
evolves according to the partial differential equation (PDE)
(38) $\dfrac{\partial \mathcal{C}}{\partial t} = \kappa^{1/3} \mathbf{N},$
where $t$ is an artificial time parameter, $\kappa$ is the Euclidean curvature of $\mathcal{C}$, and $\mathbf{N}$
is the inward unit normal vector to
$\mathcal{C}$. The solution to this equation is a one-parameter family of plane curves, $\mathcal{C}(\cdot, t)$, which evolve in an affine-invariant manner to smooth out the curve. By evolving the PDE for a longer time, we obtain successively smoothed versions of the curve.

This nonlinear curve smoothing equation gives rise to an affine-invariant scale-space for images through an application of the level set method for interface propagation set1999. Given an image $u$, the level contours of $u$, namely the sets
(39) $\mathcal{C}_c = \{ (x, y) \mid u(x, y) = c \}$
for each $c$, are made to evolve simultaneously according to PDE (38). Notice that differentiating $u(\mathcal{C}(p, t), t) = c$ with respect to $t$ along a level contour, we obtain
(40) $u_t + \nabla u \cdot \mathcal{C}_t = 0,$
where subscripts indicate partial derivatives. If we let this curve evolve according to PDE (38),
(41) $\nabla u \cdot \mathcal{C}_t = \kappa^{1/3} \, \nabla u \cdot \mathbf{N} = -\kappa^{1/3} \, \| \nabla u \|,$
where we used the fact that the gradient of a function is normal to the level curves of that function. Combining equations (40) and (41), we obtain
(42) $u_t = \kappa^{1/3} \, \| \nabla u \|.$
Lastly, we wish to write PDE (42) entirely in terms of $u$ so that it may be applied directly to images without explicitly considering the level sets of the image. Hence, we require an expression for $\kappa$, the Euclidean curvature of the level contour, in terms of the image function $u$. This is well known, and can be shown to be
(43) $\kappa = \dfrac{u_y^2 u_{xx} - 2 u_x u_y u_{xy} + u_x^2 u_{yy}}{\left( u_x^2 + u_y^2 \right)^{3/2}}.$
Substitution of this into PDE (42) gives
(44) $u_t = \left( u_y^2 u_{xx} - 2 u_x u_y u_{xy} + u_x^2 u_{yy} \right)^{1/3},$
which is the 2D affine-invariant geometric heat equation. The affine-invariant scale-space of 2D images is constructed by solving PDE (44) with initial data $u(x, y, 0) = u_0(x, y)$, the original image. As with the curve evolution equation (38), we obtain smoothed versions of the image as we evolve in time. An example of this image smoothing is shown and compared with the linear case in Figure 5. This evolution is truly affine-invariant, in the sense that for two initial images $u_0$ and $\tilde{u}_0$ related by an equi-affine transformation, the scale-space images at any time $t$, namely $u(\cdot, t)$ and $\tilde{u}(\cdot, t)$, are related by the same equi-affine transformation.
In our feature detection pipeline, the affine-invariant scale-space of an image is first computed by solving PDE (44) via finite differences. As in traditional linear methods, we sample the scale-space at six or eight discrete times, and we search for candidate feature points over each sample image. In this way, we capture details at several different scales and levels of smoothness.
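An explicit finite-difference sketch of this evolution, assuming the form (44). The signed cube root keeps the flow well defined where the argument is negative; the step size is a hypothetical choice, not the result of a stability analysis.

```python
import numpy as np

def affine_heat_step(u, dt=0.05):
    """One explicit finite-difference step of the affine-invariant heat
    equation u_t = (u_y**2*u_xx - 2*u_x*u_y*u_xy + u_x**2*u_yy)**(1/3)."""
    uy, ux = np.gradient(u)
    uyy, _ = np.gradient(uy)
    uxy, uxx = np.gradient(ux)
    rhs = uy**2 * uxx - 2 * ux * uy * uxy + ux**2 * uyy
    return u + dt * np.cbrt(rhs)   # np.cbrt is the signed cube root

def affine_scale_space(u, n_steps=20, dt=0.05):
    """Sample the affine-invariant scale-space at successive times."""
    samples = [u]
    for _ in range(n_steps):
        u = affine_heat_step(u, dt)
        samples.append(u)
    return samples
```

A constant image is a fixed point of the flow, as it should be: all derivatives vanish, so the right-hand side is zero.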
These invariant flows are examples of a more general framework, developed in olv1997; olv1994, for constructing invariant curve and surface flows under general Lie groups. Of future interest, the curve flow PDE (42) was extended in olv1997 to an affine-invariant surface evolution equation
(45) 
where $S$ is the surface in 3D and $H$ is the (nonnegative) mean curvature of $S$. Similar to the above, this equation generates an affine-invariant scale-space for 3D images and provides invariant smoothing with edge preservation.
4.3 Examples
Once feature points are detected in both images, SURF descriptors are computed for each feature point bay2006. These descriptors represent smoothed versions of the local Hessian matrix about the pixel of interest, which is known to capture local shape. These vectors are not affine-invariant, and so we apply an affine region normalization procedure prior to descriptor computation bau2000. Points are matched between images by comparing their descriptor vectors. Different metrics can be used to assess the similarity between features; the traditional Euclidean distance between the feature vectors is a simple choice, which we use here.

Once matches (correspondences) are found between the feature points, we may align the point sets. Traditionally, an affine transformation is fit to the correspondences by the least-squares method. However, this method treats every correspondence as equally reliable. In reality, feature algorithms often return erroneous matches, and so we need a fitting algorithm that is robust to mismatches. One such robust estimation method is RANSAC fis1981; col2006. In this algorithm, candidate affine transformation models are generated by selecting a minimal set of correspondences and computing the transformation based on only these points. Then, the correspondences that agree (within some threshold) with this model are counted and called inliers. The entire process is repeated many times with different randomly selected sets of correspondences. Ultimately, the generated transformation with the largest number of inliers is selected as the best model. In this manner, outliers are filtered out, since models generated using outliers will not have a large number of inliers.
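A minimal RANSAC sketch for this fitting step (three correspondences determine an affine map of the plane; the iteration count and inlier tolerance are illustrative parameters, not those used in our experiments):

```python
import numpy as np

def fit_affine(P, Q):
    """Least-squares affine map (A, t) with Q approx= P @ A.T + t."""
    X = np.hstack([P, np.ones((len(P), 1))])     # rows [x, y, 1]
    sol, *_ = np.linalg.lstsq(X, Q, rcond=None)  # shape (3, 2)
    return sol[:2].T, sol[2]

def ransac_affine(P, Q, n_iter=200, tol=1.0, rng=None):
    """RANSAC: fit candidate affine maps to random minimal samples of
    three correspondences; keep the model with the most inliers, then
    refit on the full inlier set."""
    rng = np.random.default_rng(rng)
    best, best_inliers = None, np.zeros(len(P), bool)
    for _ in range(n_iter):
        idx = rng.choice(len(P), size=3, replace=False)
        A, t = fit_affine(P[idx], Q[idx])
        resid = np.linalg.norm(P @ A.T + t - Q, axis=1)
        inliers = resid < tol
        if inliers.sum() > best_inliers.sum():
            best, best_inliers = (A, t), inliers
    if best_inliers.sum() >= 3:
        best = fit_affine(P[best_inliers], Q[best_inliers])
    return best, best_inliers
```

Models generated from contaminated samples fit few correspondences, so the consensus step reliably isolates the true transformation when enough matches are correct.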
Example alignments of affine transform image pairs are shown in Figures 6, 7, and 8. In Figure 6, the computed transformation does a very good job of aligning the images; any differences between the overlaid images are difficult to notice. This image contains a wide variety of structures, so we expect to do well here.
The computed affine transform in Figure 7 exhibits errors. This is likely due to the affineinvariant gradient detector exhibiting strong response along object edges. Edge points are difficult to match since they often only vary in one direction, and so could be believably matched to any other pixel along the same edge. The transform shown in Figure 8 suffers from the same limitations.
Another limitation of the method is that the differential invariant computations are only approximations via finite differences using traditional rectangular grids. As such, they are imperfect, and even a close inspection of Figure 2 will reveal that the responses for the two images are not exactly equal up to affine transformation. This results in inaccurate localization of feature points, and compounds the aforementioned issue with the edge points. A further investigation might pursue more accurate numerical derivative approximations, but these would slow the algorithm.
Despite these limitations, the algorithm performs well considering that it uses no prior information. The method might be used as a rough initial registration, after which a more sophisticated deformable registration might be applied. Registration could be improved by incorporating other detectors (for example, Harris corners) which, though perhaps not affine-invariant, may return points which could be reliably matched.
5 Future Directions
We have computed the fundamental equi-affine differential invariants for 3D image volumes. A future investigation will focus on the application of these invariants to an analogous invariant feature point detection and registration pipeline.
Moving frames have been used to design group-invariant numerical approximations, as in cal1996; wel2007. These methods may be applied to develop equi-affine finite difference approximations to the differential invariants, which would better serve to localize invariant feature points.
The most interesting invariants for computer vision applications are those of the 3D-to-2D projective group, since these transformations model how the real world is projected onto the image plane of a camera. Unfortunately, the projective group invariants are numerically difficult in that they involve derivatives of order higher than two, which is prohibitive for applications in which speed is a priority. The computation of projective invariants is being investigated. Related works connect the differential invariants of 3D curves and their 2D projections through the method of moving frames bur2013; kog2015.
6 Conclusion
In this paper, we have shown how the equivariant method of moving frames is used to compute the fundamental second-order differential invariants of the special affine group acting on scalar functions on $\mathbb{R}^2$. These invariants were used to construct an affine-invariant feature point detector function, which was demonstrated to perform well at alignment of affine-related image pairs. The differential invariants for the equi-affine group acting on 3D image volumes were also computed, and the extension of the 2D pipeline to 3D image volumes (e.g., MRI) is an interesting future direction.
References
 (1) C. Harris and M. Stephens, “A combined corner and edge detector,” in Alvey Vision Conference, vol. 15. Manchester, UK, 1988, pp. 147–151.
 (2) D. G. Lowe, “Distinctive image features from scaleinvariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
 (3) K. Astrom, “Fundamental limitations on projective invariants of planar curves,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, no. 1, pp. 77–81, 1995.
 (4) K. Mikolajczyk, T. Tuytelaars, C. Schmid, A. Zisserman, J. Matas, F. Schaffalitzky, T. Kadir, and L. Van Gool, “A comparison of affine region detectors,” International Journal of Computer Vision, vol. 65, no. 1–2, pp. 43–72, 2005.
 (5) K. Mikolajczyk and C. Schmid, “A performance evaluation of local descriptors,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 10, pp. 1615–1630, 2005.
 (6) C. E. Hann, “Recognising two planar objects under a projective transformation,” Ph.D. dissertation, University of Canterbury, Mathematics and Statistics, 2001.
 (7) C. E. Hann and M. Hickman, “Projective curvature and integral invariants,” Acta Applicandae Mathematica, vol. 74, no. 2, pp. 177–193, 2002.
 (8) P. J. Olver, “Moving frames,” Journal of Symbolic Computation, vol. 36, no. 3, pp. 501–512, 2003.
 (9) ——, “Modern developments in the theory and applications of moving frames,” London Mathematical Society Impact150 Stories, no. 1, pp. 14–50, 2015.
 (10) H. Guggenheimer, Differential Geometry. McGraw–Hill, New York, 1963.
 (11) M. Akivis and B. Rosenfeld, Élie Cartan (1869–1951). Translations Math. Monographs, vol. 123, American Math. Soc., Providence, R.I., 1993.
 (12) P. J. Olver, G. Sapiro, and A. Tannenbaum, “Affine invariant detection: edge maps, anisotropic diffusion, and active contours,” Acta Applicandae Mathematicae, vol. 59, no. 1, pp. 45–77, 1999.
 (13) M. Fels and P. J. Olver, “Moving coframes: II. regularization and theoretical foundations,” Acta Applicandae Mathematica, vol. 55, no. 2, pp. 127–208, 1999.
 (14) P. J. Olver, Equivalence, Invariants, and Symmetry. Cambridge University Press, 1995.
 (15) ——, “Moving frame derivation of the fundamental equiaffine differential invariants for level set functions,” preprint, University of Minnesota, 2015.
 (16) E. Calabi, P. J. Olver, and A. Tannenbaum, “Affine geometry, curve flows, and invariant numerical approximations,” Advances in Mathematics, vol. 124, no. 1, pp. 154–196, 1996.
 (17) M. Welk, P. Kim, and P. J. Olver, “Numerical invariantization for morphological pde schemes,” in International Conference on Scale Space and Variational Methods in Computer Vision. Springer, 2007, pp. 508–519.
 (18) A. Witkin, “Scalespace filtering: A new approach to multiscale description,” in Acoustics, Speech, and Signal Processing, IEEE International Conference on ICASSP’84., vol. 9. IEEE, 1984, pp. 150–153.
 (19) B. M. H. Romeny, Geometrydriven diffusion in computer vision. Springer Science & Business Media, 2013, vol. 1.
 (20) P. F. Alcantarilla, A. Bartoli, and A. J. Davison, “KAZE features,” in European Conference on Computer Vision. Springer, 2012, pp. 214–227.
 (21) P. Perona and J. Malik, “Scalespace and edge detection using anisotropic diffusion,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629–639, 1990.
 (22) G. Sapiro and A. Tannenbaum, “Affine invariant scalespace,” International Journal of Computer Vision, vol. 11, no. 1, pp. 25–44, 1993.
 (23) J. A. Sethian, Level Set Methods and Fast Marching Methods: Evolving Interfaces in Computational Geometry, Fluid Mechanics, Computer Vision, and Materials Science. Cambridge University Press, 1999, vol. 3.
 (24) P. J. Olver, G. Sapiro, and A. Tannenbaum, “Invariant geometric evolutions of surfaces and volumetric smoothing,” Siam. J. Appl. Math., vol. 57, no. 1, pp. 176–194, 1997.
 (25) ——, “Differential invariant signatures and flows in computer vision: A symmetry group approach,” in GeometryDriven Diffusion in Computer Vision. Springer, 1994, pp. 255–306.
 (26) H. Bay, T. Tuytelaars, and L. V. Gool, “SURF: Speeded up robust features,” ECCV 2006, pp. 404–417, 2006.

 (27) A. Baumberg, “Reliable feature matching across widely separated views,” in IEEE Conference on Computer Vision and Pattern Recognition, vol. 1. IEEE, 2000, pp. 774–781.
 (28) M. A. Fischler and R. C. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,” Communications of the ACM, vol. 24, no. 6, pp. 381–395, 1981.
 (29) T. Colleu, J.K. Shen, B. Matuszewski, L.K. Shark, and C. Cariou, “Featurebased deformable image registration with ransac based search correspondence,” in AECRIS’06Atlantic Europe Conference on Remote Imaging and Spectroscopy, 2006, pp. 57–64.
 (30) J. M. Burdis, I. A. Kogan, and H. Hong, “Objectimage correspondence for algebraic curves under projections,” SIGMA: Symmetry Integrability Geom. Methods Appl., vol. 9, p. 023, 2013.
 (31) I. A. Kogan and P. J. Olver, “Invariants of objects and their images under surjective maps,” Lobachevskii Journal of Mathematics, vol. 36, no. 3, pp. 260–285, 2015.