The computation of informative tangent vector quantities on surfaces is a widely studied topic. The most standard vector quantities one can consider are the principal directions, which can be estimated via differential geometry tools. However, these do not necessarily serve all Computer Graphics purposes: on umbilical surface parts, or at monkey saddles, this vector field becomes locally irrelevant. Further, it only accounts for edge-like structures of the shape, which is limiting. Our approach considers per-point vector quantity estimation, adopting a differential analysis point of view. We strive to go beyond the differential order 2 usually considered when analysing surfaces, and extend the definition of principal directions to higher orders. We show experimentally why this definition, beyond its theoretical interest, is appealing for surface analysis.
To summarize, our contributions are as follows:
The mathematical definition of arbitrary order principal directions and their link with symmetric tensor eigenvalues.
A theoretical analysis of their properties.
A practical way of computing the directions on a sampled surface.
2 Related Work
In this section we review recent works on tangent vector quantities that can be set or estimated on a meshed or sampled surface, be they guided by differential properties or designed by a global optimization process with user-prescribed constraints.
Differential quantities estimation
Estimating differential quantities on surfaces has been at the core of Geometry Processing research. However, surface analysis is very often restricted to order 1 and 2 differential properties and has seldom tackled higher order properties. Among order 2 quantities, the most famous one may be the Laplace-Beltrami operator, whose design has attracted a lot of work, both from a theoretical standpoint (e.g. [Meyer02, Wardetzky07]) and a practical one through the Manifold Harmonics Basis (e.g. [Vallet08]). Related to the Laplace-Beltrami operator is the estimation of principal curvatures and curvature directions (or equivalently the curvature tensor), either on a point set surface [Kalogerakis07, Kalogerakis09] or on meshes [Cohen-Steiner03], with applications to curvature line tracking among others.
It is however possible to get access to higher order derivatives of the local surface using local regression, in the context of Moving Least Squares [Levin98]. Among those methods, Osculating Jets [Cazals05] express the surface locally as a truncated Taylor expansion with respect to a local planar parametrization. The coefficients can be estimated through a linear system solve and then give direct access to high order cross derivatives. Interestingly, this approach proved that the error on the estimation of order-$k$ differential quantities in a neighborhood of radius $h$, using a Taylor expansion of order $n$, is in $O(h^{n-k+1})$. In other words, and quite counter-intuitively, to increase the accuracy of an order 2 estimation, one should still consider a large Taylor expansion order. Several other bases have been proposed following this trend, including the Wavejets basis [Bearzi18], which is less sensitive to the choice of the local reference frame in the parametrization plane. When the surface is described as a point set, the regression relies on Iteratively Reweighted Least Squares in the presence of noise and/or outliers. Going further than order 2, Rusinkiewicz [Rusinkiewicz04] introduced a way to compute curvatures and curvature derivatives, but does not go beyond this order.
All these methods essentially perform per-point estimation and do not take into account any global regularization constraints. For example, on planar or spherical surfaces, curvature directions are erratic in the absence of smoothness constraints, whereas many computer graphics applications require smooth direction fields. Hence researchers have turned to the design of consistent vector fields better suited to some application purposes.
Tangent Vector Field Design
The problem of tangent vector field design is to compute a smooth tangent vector field with user-prescribed constraints at given points of the surface, while optimizing some regularity criterion. We only review some of the seminal papers of this field and refer the reader to [Vaxman16] for an extensive review. Many vector field design methods focus on smooth N-symmetric vector fields, also known as N-rotationally-symmetric direction fields (N-RoSy). N-symmetry directions are defined as sets of N directions invariant by rotations of angle $2\pi/N$. Ray et al. [Ray08] generalize the notion of singularity and singularity index to these fields, and provide a way of controlling singularities on surface meshes. Lai et al. [Lai10] cast the vector field design as a Riemannian metric design problem. Further smoothness constraints [Crane10, Knoppel13] and global symmetry-enforcing constraints [Panozzo12] were proposed. N-RoSy fields were also extended to vector fields that are neither necessarily orthogonal nor rotationally symmetric [Diamanti14], with appropriate differential operators [Diamanti15] and an application to Chebyshev nets computation [Sageman-Furnas19].
The generalization of the Laplace-Beltrami operator to tangential vector fields, and the subspace spanned by its eigenvectors up to a given order, may be used for regularizing vector field design [Brandt17]. Following this development of subspace methods for tangent field design [Brandt18], Nasikun et al. [Nasikun20] consider tangent vector field design and processing via locally-supported tangential fields, leading to fast approximation and design algorithms.
Applications of Tangent Vector Fields
Applications of tangential vector fields range from texture mapping and texture synthesis on surfaces (e.g. [Wei01, Knoppel15]) to fluid simulation (e.g. [Azencot15]). Shape reconstruction and quad-meshing have been tackled by combining a position field and an N-RoSy field [Jakob15], yielding an extremely fast interactive algorithm.
In this paper, we are also interested in computing per-point sets of directions that are neither necessarily orthogonal nor rotationally symmetric, but these directions stem from arbitrary order differential properties of the surface. Hence smoothness is obtained by continuity of the underlying mathematical surface.
3 Arbitrary order symmetric tensor
Our work makes extensive use of symmetric tensors and the theory developed by Qi for their spectral analysis [Qi05, Qi06, Qi07].
An $m$-dimensional symmetric tensor $T$ of order $d$ is a $d$-dimensional array $(T_{i_1 \cdots i_d})$ such that, given a set of indices $(i_1, \dots, i_d)$ with $i_j \in \{1, \dots, m\}$, for any permutation $\sigma$ on $\{1, \dots, d\}$:
$$T_{i_{\sigma(1)} \cdots i_{\sigma(d)}} = T_{i_1 \cdots i_d}.$$
Notice that in Qi’s work, a distinction is made between a tensor and a supermatrix, that is, a tensor’s representation in a given basis. For clarity’s sake, we here rather use the term tensor for both the object and its representation in a basis.
From now on, we will always consider $m = 2$, since we are interested in tensors of differential properties related to surfaces of dimension 2. In this case, a symmetric tensor of order $d$ can be seen as a $(d-1)$-dimensional array of vectors of length 2. By convention, we define any vector to be a symmetric tensor of order 1. Given a vector $v$, we note $v^d$ the symmetric tensor of order $d$ generated by multiplying $v$ $d$ times with itself using the Cartesian product. In particular, we set $v^0 = 1$ by convention.
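To make these conventions concrete, here is a minimal pure-Python sketch (all helper names are hypothetical, not from the paper) that stores a 2-dimensional symmetric tensor of order $d$ as a nested list of depth $d$ and builds the rank-one tensor $v^d$:

```python
# A 2-dimensional symmetric tensor of order d is stored as a nested list
# of depth d, every index ranging over {0, 1} (the x and y directions).

def _scale(t, s):
    """Multiply every entry of a nested tensor by the scalar s."""
    return [_scale(u, s) for u in t] if isinstance(t, list) else t * s

def tensor_power(v, d):
    """Return the order-d symmetric tensor v^d = v x v x ... x v (d factors)."""
    if d == 0:
        return 1.0   # by convention, the order-0 power is the scalar 1
    # entry (i1, ..., id) equals v[i1] * ... * v[id]
    return [_scale(tensor_power(v, d - 1), v[i]) for i in range(2)]

v = (2.0, 3.0)
T = tensor_power(v, 3)   # order-3 tensor, entries v[i] * v[j] * v[k]
print(T[0][1][1])        # 2 * 3 * 3 = 18.0
```

The symmetry of the result is visible in the indexing: permuting the indices leaves the entry unchanged, e.g. `T[0][1][1] == T[1][0][1]`.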
Multiplying a symmetric tensor by a vector produces a symmetric tensor of order lowered by 1. Let $T$ be a symmetric tensor of order $d$; it is composed of two symmetric tensors $T_x$ and $T_y$ of order $d-1$ and can be written $T = (T_x, T_y)$. Then $T \cdot v$ is the symmetric tensor of order $d-1$ such that:
$$T \cdot v = v_x T_x + v_y T_y.$$
The product reduces the order of $T$ by contracting on an arbitrary index. Since $T$ is symmetric, any index used for the contraction yields the same list of numbers in $T \cdot v$. While the eigendecomposition of matrices is a well understood theory with important applications in Geometry Processing among others, the generalization to arbitrary order tensors is highly nontrivial. Qi introduced a definition extending eigenvalues and eigenvectors to symmetric tensors [Qi05, Qi06, Qi07], as follows:
E-eigenvalues [Qi05] Given a symmetric tensor $T$ of order $d$, if there exist $\lambda \in \mathbb{C}$ and a vector $v \in \mathbb{C}^2$ such that:
$$T \cdot v^{d-1} = \lambda v \quad \text{and} \quad v^T v = 1, \qquad (2)$$
then $\lambda$ is called an E-eigenvalue of $T$ and $v$ is called an E-eigenvector of $T$. The set of $\lambda$ satisfying (2) is the set of roots of a polynomial called the E-characteristic polynomial.
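As an illustration, the contraction product and the E-eigenvector condition (2) can be sketched in pure Python, again storing an order-$d$ tensor over dimension 2 as a nested list of depth $d$ (helper names are hypothetical):

```python
def _scale(t, s):
    """Multiply every entry of a nested tensor by the scalar s."""
    return [_scale(u, s) for u in t] if isinstance(t, list) else t * s

def _add(a, b):
    """Entrywise sum of two nested tensors of the same order."""
    return [_add(x, y) for x, y in zip(a, b)] if isinstance(a, list) else a + b

def contract(T, v):
    """One contraction T.v, lowering the order by 1: writing T = (T_x, T_y),
    T.v = v_x T_x + v_y T_y (any index works since T is symmetric)."""
    return _add(_scale(T[0], v[0]), _scale(T[1], v[1]))

# Order-3 tensor e1 x e1 x e1.  Contracting it twice with the unit vector
# e1 returns e1 itself, so e1 is an E-eigenvector with E-eigenvalue 1.
T = [[[1.0, 0.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 0.0]]]
e1 = (1.0, 0.0)
g = contract(contract(T, e1), e1)   # T . e1^{d-1} with d = 3
print(g)                            # [1.0, 0.0] = 1 * e1
```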
4 Arbitrary order principal directions
Tensors can be used to write Taylor expansions. As an example, one can write the first two terms of a bivariate Taylor expansion. Given an arbitrary vector $u$, the gradient $\nabla f$ and the Hessian $H_f$ of a function $f$ defined on $\mathbb{R}^2$ with values in $\mathbb{R}$ and twice continuously differentiable at $p$:
$$f(p + u) = f(p) + \nabla f \cdot u + \frac{1}{2} H_f \cdot u \cdot u + o(\|u\|^2).$$
Note that $H_f$ is symmetric, and so is $\nabla f$ since its order is 1. This expression can be generalized to higher order Taylor expansions using tensors.
The Taylor expansion of order $d$ of a function from $\mathbb{R}^2$ to $\mathbb{R}$ is:
$$f(p + u) = \sum_{k=0}^{d} \frac{1}{k!} \sum_{i=0}^{k} \binom{k}{i} \frac{\partial^k f}{\partial x^i \partial y^{k-i}}(p)\, u_x^i\, u_y^{k-i} + o(\|u\|^d).$$
The differential tensor is now defined as:
For a function $f$ defined on $\mathbb{R}^2$ with values in $\mathbb{R}$, $d$ times differentiable, the order-$d$ differential tensor $T_f^d$ at $p$ is an order-$d$ 2-dimensional tensor whose coefficients are:
$$(T_f^d)_{i_1 \cdots i_d} = \frac{\partial^d f}{\partial x_{i_1} \cdots \partial x_{i_d}}(p),$$
with $i_j \in \{1, 2\}$ for $j = 1, \dots, d$, and $x_1 = x$, $x_2 = y$.
If $f$ is $d$ times continuously differentiable, then the order in which the differentiation is done is irrelevant and thus $T_f^d$ is symmetric. Using Definition 5 and the contraction product, we have:
$$T_f^d \cdot u^d = \sum_{i=0}^{d} \binom{d}{i} \frac{\partial^d f}{\partial x^i \partial y^{d-i}}(p)\, u_x^i\, u_y^{d-i}.$$
Hence, using Equation 4, we get the Taylor formula for a $d$ times differentiable function $f$:
$$f(p + u) = \sum_{k=0}^{d} \frac{1}{k!}\, T_f^k \cdot u^k + o(\|u\|^d).$$
The following Lemma shows that the gradient of each of the terms involved in the Taylor expansion can be obtained by contracting the corresponding tensor $k - 1$ times, i.e. one time fewer than in the expansion. This will then allow us to search for extrema at different orders.
Let $T$ be a 2-dimensional symmetric tensor of order $d$ and let $u$ be a vector. Then $\nabla_u (T \cdot u^d) = d\, T \cdot u^{d-1}$.
For $d = 1$, $T \cdot u = T_x u_x + T_y u_y$, and:
$$\nabla_u (T \cdot u) = (T_x, T_y) = T.$$
Assume that for any symmetric tensor $S$ of order $d-1$ we have $\nabla_u (S \cdot u^{d-1}) = (d-1)\, S \cdot u^{d-2}$; then, writing $T \cdot u^d = u_x (T_x \cdot u^{d-1}) + u_y (T_y \cdot u^{d-1})$, the product rule together with the symmetry of $T$ gives:
$$\nabla_u (T \cdot u^d) = d\, T \cdot u^{d-1}.$$
By induction, the property is true for all $d$.
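The tensor Taylor formula can be checked numerically with repeated contractions. Below is a minimal pure-Python sketch (helper names hypothetical), using $f(x, y) = x^2 y$: its only nonzero differential tensor at the origin is the order-3 tensor with value 2 at every permutation of the index $(x, x, y)$, so the degree-3 Taylor term reproduces $f$ exactly:

```python
import math

def _scale(t, s):
    """Multiply every entry of a nested tensor by the scalar s."""
    return [_scale(u, s) for u in t] if isinstance(t, list) else t * s

def _add(a, b):
    """Entrywise sum of two nested tensors of the same order."""
    return [_add(x, y) for x, y in zip(a, b)] if isinstance(a, list) else a + b

def contract(T, v):
    """One contraction T.v, lowering the order by 1."""
    return _add(_scale(T[0], v[0]), _scale(T[1], v[1]))

def apply_k(T, v, k):
    """Contract T with v k times: T . v^k (a scalar when k equals the order)."""
    for _ in range(k):
        T = contract(T, v)
    return T

# f(x, y) = x^2 y: third derivatives are 2 for the index permutations of
# (x, x, y) and 0 otherwise.
T3 = [[[0.0, 2.0], [2.0, 0.0]], [[2.0, 0.0], [0.0, 0.0]]]
u = (0.5, -1.5)
taylor = apply_k(T3, u, 3) / math.factorial(3)   # (1/3!) T3 . u^3
print(taylor, u[0] ** 2 * u[1])                  # both equal -0.375
```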
Given a $d$ times continuously differentiable function $f$ and a point $p$, let $T_f^d$ be the real symmetric order-$d$ differential tensor of $f$ at $p$; the unit vectors $u$ for which $u \mapsto T_f^d \cdot u^d$ is extremal on the unit circle are real E-eigenvectors of $T_f^d$, i.e.:
$$T_f^d \cdot u^{d-1} = \lambda u, \quad u^T u = 1, \quad \lambda \in \mathbb{R}.$$
First, one can notice, using Equation 7 with $u = r(\cos\theta, \sin\theta)^T$, that:
$$T_f^d \cdot u^d = r^d\, T_f^d \cdot (\cos\theta, \sin\theta)^d.$$
Differentiating w.r.t. the radius $r$ gives:
$$\partial_r \left( T_f^d \cdot u^d \right) = d\, r^{d-1}\, T_f^d \cdot (\cos\theta, \sin\theta)^d.$$
Since we restrict to the unit circle, $r = 1$ and we get:
$$\partial_r \left( T_f^d \cdot u^d \right) = d\, T_f^d \cdot u^d.$$
Differentiating w.r.t. the angle $\theta$ to look for extrema:
$$\partial_\theta \left( T_f^d \cdot u^d \right) = d \left( T_f^d \cdot u^{d-1} \right) \cdot u^\perp = 0, \quad u^\perp = (-\sin\theta, \cos\theta)^T,$$
which yields the following equations: $T_f^d \cdot u^{d-1}$ is orthogonal to $u^\perp$, hence collinear with $u$, i.e. $T_f^d \cdot u^{d-1} = \lambda u$ for some $\lambda \in \mathbb{R}$.
Since we look for real unit vectors, we add the constraint that $u^T u = 1$. Moreover, multiplying the previous equation by $u$, we get $\lambda = T_f^d \cdot u^d$ and $\lambda$ is a real E-eigenvalue of $T_f^d$. The converse holds using the same equations.
Given a $d$ times continuously differentiable function $f$ defined on $\mathbb{R}^2$ with values in $\mathbb{R}$, the principal directions of order $d$ ($d \geq 2$) of $f$ at $p$ are defined as the real E-eigenvectors of its order-$d$ differential tensor $T_f^d$.
One should notice that Qi defined several types of eigenvalues and eigenvectors [Qi05, Qi06, Qi07]. In particular, if an E-eigenvector $v$ is real and if its corresponding E-eigenvalue $\lambda$ is also real, then $v$ is a Z-eigenvector and $\lambda$ is a Z-eigenvalue. Hence our E-eigenvalues are also Z-eigenvalues in this setting.
Figure 2 illustrates the principal directions of orders 3 and 8 for some illustrative functions.
The above form is not very amenable to finding the zero-crossings of the derivative of $T_f^d \cdot u^d$ with respect to $\theta$. Instead, we propose to use its expression in the Wavejets basis (consisting of all functions $\phi_{k,n}(r, \theta) = r^k e^{i n \theta}$ for $|n| \leq k$) [Bearzi18], hence using polar coordinates:
$$f(r, \theta) = \sum_{k} \sum_{|n| \leq k} f_{k,n}\, r^k\, e^{i n \theta},$$
with $f_{k,n}$ the Wavejets decomposition coefficients. Among other properties, $f_{k,-n} = \overline{f_{k,n}}$ for a real function, and $f_{k,n} = 0$ if $k$ and $n$ do not share the same parity (see [Bearzi18] for more details).
By identification of the coefficients in front of the powers of $r$ with the coefficients of the Taylor expansion using tensors, we have:
$$\frac{1}{d!}\, T_f^d \cdot (\cos\theta, \sin\theta)^d = \sum_{|n| \leq d} f_{d,n}\, e^{i n \theta}.$$
Given a function $f$ defined on $\mathbb{R}^2$ with values in $\mathbb{R}$, $d$ times differentiable at $p$, the principal directions of order $d$ of $f$ correspond to the real E-eigenvectors of $T_f^d$, and they can be retrieved from the Wavejets decomposition of $f$ by looking at the zeros of:
$$\sum_{|n| \leq d} i\, n\, f_{d,n}\, e^{i n \theta}. \qquad (18)$$
Since $f_{d,-n} = \overline{f_{d,n}}$ for the coefficients in the Wavejet decomposition of a real function, the zero-crossings of Equation 18 can be obtained by solving the following equation:
$$\sum_{n=1}^{d} n\, \operatorname{Im}\!\left( f_{d,n}\, e^{i n \theta} \right) = 0. \qquad (19)$$
This form is more convenient for finding roots, for example using Newton's method.
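As an illustration of this root finding, here is a pure-Python sketch (all names hypothetical) that brackets the zero-crossings of the real trigonometric polynomial by dense sampling and polishes them with Newton iterations, assuming the order-$d$ Wavejets coefficients $f_{d,n}$ are given for $n \geq 1$:

```python
import cmath
import math

def b_prime(coeffs, t):
    """Derivative of sum_n f_{d,n} e^{in t}: using f_{d,-n} = conj(f_{d,n}),
    this equals sum_{n>=1} -2 n Im(f_{d,n} e^{in t}), a real number."""
    return sum(-2.0 * n * (f * cmath.exp(1j * n * t)).imag
               for n, f in coeffs.items())

def b_second(coeffs, t):
    """Second derivative: sum_{n>=1} -2 n^2 Re(f_{d,n} e^{in t})."""
    return sum(-2.0 * n * n * (f * cmath.exp(1j * n * t)).real
               for n, f in coeffs.items())

def principal_angles(coeffs, samples=720, newton_steps=6):
    """Bracket sign changes of the derivative on [0, 2*pi), then refine
    each bracket midpoint with a few Newton iterations."""
    roots = []
    for k in range(samples):
        t0 = 2.0 * math.pi * k / samples
        t1 = 2.0 * math.pi * (k + 1) / samples
        if b_prime(coeffs, t0) * b_prime(coeffs, t1) < 0.0:
            t = 0.5 * (t0 + t1)
            for _ in range(newton_steps):
                t -= b_prime(coeffs, t) / b_second(coeffs, t)
            roots.append(t % (2.0 * math.pi))
    return sorted(roots)

# A rotated monkey saddle: f_{3,3} = 0.5 e^{0.3i}, so the order-3 angular
# term is cos(3t + 0.3) and its derivative -3 sin(3t + 0.3) has 6 zeros,
# 3 maxima alternating with 3 minima.
angles = principal_angles({3: 0.5 * cmath.exp(0.3j)})
print(len(angles))   # 6
```

Classifying each root as a maximum or minimum principal direction then amounts to checking the sign of the second derivative at the root.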
So far, we defined the principal directions as the eigenvectors of a tensor, which we linked to the extrema of the function $\theta \mapsto T_f^d \cdot (\cos\theta, \sin\theta)^d$. The eigenvalues of the tensor can also be linked to this function. Calling $\theta^*$ an angle corresponding to an extremum of this function, the corresponding eigenvalue can be recovered as:
$$\lambda = T_f^d \cdot (\cos\theta^*, \sin\theta^*)^d.$$
This follows directly from the last equality in Equation 15.
Among all principal directions, we call maximum principal directions (resp. minimum principal directions) the directions that correspond to local maxima (resp. local minima) of $\theta \mapsto T_f^d \cdot (\cos\theta, \sin\theta)^d$.
5 Properties of the principal directions
Constraints on the principal directions
The functions $\phi_{k,n}$ (Definition 5) can be rewritten as $r^k (\cos(n\theta) + i \sin(n\theta))$. From the periodicity of the cosine and sine functions, $\phi_{k,n}(r, \theta + \pi) = (-1)^n \phi_{k,n}(r, \theta)$; since $f_{d,n} = 0$ when $n$ and $d$ do not share the same parity, we deduce that:
If $d$ is even, then $\sum_n f_{d,n} e^{i n (\theta + \pi)} = \sum_n f_{d,n} e^{i n \theta}$; hence if $\theta$ corresponds to a maximum principal direction, $\theta + \pi$ is also a maximum principal direction.
If $d$ is odd, $\sum_n f_{d,n} e^{i n (\theta + \pi)} = -\sum_n f_{d,n} e^{i n \theta}$; hence if $\theta$ corresponds to a maximum principal direction, then $\theta + \pi$ corresponds to a minimum principal direction.
Number of principal directions
There are at most $2d$ principal directions of order $d$, since finding the zeros of Equation 18 amounts to finding the unit-norm roots of a polynomial of degree $2d$ (obtained by multiplying Equation 18 by $e^{i d \theta}$). Since two maximum principal directions must be separated by a minimum principal direction and conversely, the numbers of maximum and minimum principal directions are equal, and their maximum number is thus $d$. Following the parity constraints on the locations of maxima and minima above, for an even order the number of maximum principal directions is necessarily even; for similar reasons, for an odd order it is necessarily odd.
Order 2 principal directions
The principal directions of order 2 correspond to the classical principal curvature directions; however, the maximum (resp. minimum) principal directions might not correspond to the maximum (resp. minimum) curvature directions.
Regularity and link with N-RoSy
Order $d$ principal directions form a $2d$-RoSy if and only if $f_{d,n} = 0$ for all $|n| < d$. The principal direction distribution can however not be arbitrary: this can be seen by considering order 3 principal directions, fixing their 3 angles and trying to solve for the coefficients yielding a zero derivative at these 3 angles. This amounts to considering 6 unknowns corresponding to the real and imaginary parts of the coefficients $f_{3,1}$ and $f_{3,3}$ (since $f_{3,-n} = \overline{f_{3,n}}$), linked by 6 equations given by the real and imaginary parts of Equation 18 evaluated at the 3 angles. A rank analysis yields that the system is sometimes invertible, and hence admits only the trivial solution with all coefficients set to 0; sometimes the system is rank-deficient, depending on the chosen angles, and hence admits a nontrivial solution. Hence not all kinds of irregularities can be represented by the principal directions of the tensor. On Figure 2 we show a monkey saddle and an octopus saddle, whose respective order 3 and 8 principal directions correspond to a 3-RoSy and an 8-RoSy when considering only maximum (alternatively only minimum) principal directions.
6 Directions Computation per point
We now propose to compute these directions on point set surfaces by using a local parametrization around each point. In most Geometry Processing applications, surfaces are known only through a set of points, sometimes connected into a mesh, and the surface in between should be inferred by regression to estimate principal directions.
In practice, to compute principal directions, we perform a local Wavejets surface regression with a high enough order (kept fixed in our experiments). More precisely, let $p_i$ be a point of the point set and let $(p_j)_j$ be its neighbors within some user-defined radius $\rho$. We compute a local tangent plane by robust PCA and deduce a local parametrization plane, on which we choose an arbitrary tangent direction serving as the origin direction for computing the local polar coordinates $(r_j, \theta_j, z_j)$ for each $p_j$. Then we solve for the Wavejets coefficients $f_{k,n}$ by minimizing:
$$\sum_j e^{-\frac{r_j^2}{2\sigma^2}} \Big| z_j - \sum_{k,n} f_{k,n}\, r_j^k\, e^{i n \theta_j} \Big|^s,$$
with $\sigma$ proportional to the radius $\rho$ and $s \in \{1, 2\}$. This Gaussian weight avoids sharp boundary effects and makes the Wavejets estimation smoother in the ambient space.
Depending on the type of data, we can use the $\ell_1$ norm ($s = 1$) when there is noise and outliers, or the $\ell_2$ norm ($s = 2$) if the data has low noise. As shown by Levin [Levin98, Levin15], the coefficients obtained by Moving Least Squares are continuously differentiable if an $\ell_2$ norm is used; the $\ell_1$ norm does not provide such a guarantee. Hence, depending on the required smoothness, one should choose the norm in the estimation procedure of Equation 20 accordingly.
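To give an idea of the regression step, here is a simplified pure-Python sketch (all names hypothetical): it stands in for the Wavejets fit by using an order-2 monomial basis and a plain weighted $\ell_2$ fit with Gaussian weights, solved through dense normal equations:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_quadratic(points, radius):
    """Weighted least-squares fit of z over the monomials 1, x, y, x^2, xy, y^2,
    with Gaussian weights exp(-r^2 / (2 sigma^2)), sigma tied to the radius."""
    basis = lambda x, y: [1.0, x, y, x * x, x * y, y * y]
    n, sigma = 6, radius / 3.0
    A = [[0.0] * n for _ in range(n)]
    rhs = [0.0] * n
    for x, y, z in points:
        w = math.exp(-(x * x + y * y) / (2.0 * sigma * sigma))
        m = basis(x, y)
        for i in range(n):
            rhs[i] += w * m[i] * z
            for j in range(n):
                A[i][j] += w * m[i] * m[j]
    return solve(A, rhs)

# Synthetic saddle z = x^2 - y^2 sampled on a grid around the origin:
# the fit recovers the x^2 and y^2 coefficients (indices 3 and 5).
pts = [(0.1 * i, 0.1 * j, (0.1 * i) ** 2 - (0.1 * j) ** 2)
       for i in range(-5, 6) for j in range(-5, 6)]
c = fit_quadratic(pts, radius=0.5)
print(round(c[3], 6), round(c[5], 6))   # close to 1.0 and -1.0
```

A robust $\ell_1$ variant would wrap this solve in Iteratively Reweighted Least Squares, updating the weights from the current residuals.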
To illustrate the behavior of arbitrary order principal directions, we compute order 3 principal directions on a synthetic surface controlled by its Wavejets coefficients (Figure 3). The number of maximum principal directions is either 1 or 3 (and hence so is the number of minimum principal directions).
On Figure 4 we show order 2 and 3 principal directions on a smooth synthetic surface evolving from a ridge to a smooth T-junction. One can see that order 3 slowly takes over order 2, with a preferred direction.
Figure 5 illustrates the behavior of orders 2 to 7 principal directions on a sharp feature created by 5 intersecting planes. No order alone captures all intersection directions, but orders 5 and 7 contain the correct directions. Interestingly, order 7 degenerates to create only 5 maximum principal directions.
On Figure 6, we show the behavior of order 2 and 3 principal directions along the edges and corners of a synthetic cube. The length of the directions reflects the amplitude of the extremum. Order 3, accounting for some antisymmetry, vanishes at edge points (which are symmetric), and order 2 vanishes at the corners (which are antisymmetric).
Real world models
Figures 1 and 7 show some of the principal directions computed by our method on a sampled Armadillo model. The principal direction orders are chosen manually according to local geometric features. As expected, order 2 accounts well for valleys, while higher orders capture valley crossings and antisymmetric, monkey-saddle-like features.
Figure 8 shows the principal directions of orders 2 and 3 computed at various locations. While it is obvious that sometimes order 2 is enough (side of the nose), order 3 is meaningful between the eyebrows and around the lips.
To show the robustness of our principal direction estimation, we add some Gaussian noise to the data. Figure 11 shows examples of principal direction estimation of orders 2 and 3 on a cube with various noise levels. Importantly, this robustness does not stem from the principal direction decomposition itself but from the coefficient estimation. Once the coefficients are estimated, the principal directions are obtained through function maximum and minimum computations, which introduce only numerical errors.
Our definition of principal directions is an extension of the principal curvature directions to higher orders, and as such they are continuous on generic surfaces. For surfaces that are umbilical at every order, such as a sphere or a plane, the directions will simply vanish (since no extremum will be found). Further smoothness constraints could be set locally to enforce some consistency; however, this goes beyond the scope of this paper. Our approach also shares a limitation common to many local estimation methods: a radius should be chosen so that the analysis is local enough, but also such that the neighborhood encloses enough points. The radius has indeed a direct impact on what is captured by the principal directions, as illustrated on Figure 12. Importantly, our method does not perform better for curvature principal direction estimation than Osculating Jets [Cazals05] or APSS [Guennebaud07]; our contribution lies in the extension to arbitrary orders rather than in the estimation accuracy itself. Finally, while it is appealing to consider that order 3 principal directions capture three ridges meeting at a single point, some precautions ought to be taken: if the ridges meet at a perfect T-junction, the maximum (or minimum) principal directions will not capture the 3 ridge directions, because two maximum order 3 principal directions cannot be opposite. Figure 13 illustrates this behavior (see also Figure 5).
7 Conclusion
In this paper we introduced an extension of principal curvature directions to arbitrary differential orders and showed the link with the eigenvectors and eigenvalues of differential tensors. We showed that these new intrinsic direction fields are relevant on several shapes and can be computed efficiently, even in the presence of sharp features. As future work, some global smoothness constraints could be added, to enforce surrogate direction computations on surfaces where the directions vanish. Many more applications of this new type of principal directions remain to be explored.