1 Introduction
The importance of various types of symmetry is evident in problems such as shape segmentation, mesh repairing, shape matching, retrieving the normal forms of 3D models [30, 60, 12], inverse procedural modeling [31], shape recognition [14], shape understanding [34], shape completion [47, 49], and shape editing [55, 33, 15, 20, 54, 10]. Detecting intrinsic symmetry is known to be an NP-hard problem, since it amounts to finding an intrinsically symmetric point for each point [37]. However, the correspondences between intrinsically symmetric points completely characterize the intrinsic symmetry of a shape, since intrinsic symmetry is a non-rigid transformation which, unlike extrinsic symmetry, cannot be represented by a matrix [16]. We exploit the functional map approach, which finds correspondences between functions instead of points [36]. The point-to-point correspondences can then be recovered efficiently by nearest-neighbor search over the $n$ vertices. We extend this framework to the detection of intrinsic symmetry. The functional map framework has already been used for detecting intrinsic symmetry in previous works [24, 52]. The main task in these approaches is to determine the functional correspondence matrix which transforms a function to its intrinsic image. Various constraints have been enforced on this matrix, and it is not yet known how many constraints are sufficient. Further, these works estimate the functional correspondence matrix by solving a non-linear optimization problem, which makes them slow. For the intrinsic symmetry detection problem, we show that the functional correspondence matrix is a diagonal matrix whose diagonal entries are $+1$ ($-1$) if the corresponding eigenfunctions are even (odd) functions. This closed-form solution makes our method faster.

We determine whether a particular eigenfunction is even or odd based on the following result: an eigenfunction is an even (odd) function if its restriction to the shortest length geodesic between two intrinsically symmetric points is an even (odd) function. Therefore, we need a few accurate pairs of intrinsically symmetric points, which we find as follows. If we directly pair points based on the similarity of their heat kernel signatures [48], we may get false pairs, for example, a pair formed by the tips of the index finger and the ring finger of the same hand of a human model. The reason is that if two neighboring points are subjected to heat sources of the same strength, their heat diffusion processes will also be similar, because the fingers are very small relative to the body. However, we observe that a low-frequency eigenfunction has the same sign on neighboring points. Hence, we assign a high penalty for pairing two points if the signs of the first few low-frequency eigenfunctions agree at both points. The models of the benchmark datasets are obtained by applying an imperfect isometry, so the theory only holds approximately. Furthermore, some of the triangles may not be Delaunay triangles. Therefore, we may not get accurate correspondences using the original eigenfunctions. Hence, we transform the original eigenfunctions to make them nearly perfect even or odd functions. Our main contributions are the following.

We propose a novel approach to find a few accurate pairs of intrinsically symmetric points based on the following property of eigenfunctions: a low-frequency eigenfunction has the same sign on neighboring points.

We propose a novel and efficient approach for finding the functional correspondence matrix. We prove that the functional map matrix for the intrinsic symmetry detection problem is a diagonal matrix whose diagonal entries are $+1$ ($-1$) if the corresponding eigenfunctions are even (odd) functions.

We propose a novel approach to determine the sign of an eigenfunction by showing that, if a manifold admits an intrinsic symmetry and an eigenfunction is an even (odd) function, then its restriction to the shortest length geodesic between any two intrinsically symmetric points is also an even (odd) function.

We transform the eigenfunctions to make them more invariant to self-isometry.
2 Related Works
Reflective symmetry detection methods are categorized by the type of the input data and the type of reflective symmetry present in it. The input data can be a digital image, a point cloud, a triangle mesh, etc. The main types of reflective symmetry are extrinsic and intrinsic. Well-known methods for detecting extrinsic symmetry include [29, 40, 28, 21, 45, 51, 46, 47, 26], and for detecting intrinsic symmetry [41, 37, 5, 6, 13, 17, 18, 22, 27, 45, 60, 58, 56, 53, 44, 38, 42]. Furthermore, intrinsic symmetry can be characterized as global or partial. Our method finds global and partial intrinsic symmetry in triangle meshes. We discuss only the most relevant works and refer the reader to the excellent state-of-the-art report in [32] and the survey in [25].
Ovsjanikov et al. detected intrinsic symmetry using the global point signature (GPS) [37]. The main claim was that the GPS embedding transforms the problem from intrinsic to extrinsic symmetry detection. They showed that the GPS embedding is robust to topological noise. They found pairs of symmetric points by comparing the GPS of one point to the signed GPS of the other point. Determining the sign (even or odd) of an eigenfunction is expensive in their approach, since they compare the GPSs of all possible pairs. We propose a more efficient method for detecting intrinsic symmetry. Furthermore, we observe that the approach of [37] is sensitive to sign flips and to the ordering of the eigenfunctions, whereas our method is independent of both, since we determine the sign of each eigenfunction independently of the others. Mitra et al. used a voting-based approach to detect intrinsic symmetry and then applied a transformation in the voting space to deform the input model into one with perfect extrinsic symmetry [30]. Xu et al. used a generalized voting scheme to find the partial intrinsic symmetry curve without explicitly finding the intrinsically symmetric point for each point [57]. Xu et al. efficiently found pairs of intrinsically symmetric points using a voting-based approach [56]. They factored out symmetry based on the scale of symmetry. However, they needed to tune a parameter depending on how much the input shape is distorted [56]. The methods by Zheng et al. ([60], [13]) also use a voting approach. These voting-based methods do not utilize spatial coherency. Therefore, they may produce pairs of intrinsically symmetric points which are not spatially continuous. Furthermore, they may have high complexity due to the large number of possible pairs to vote on.
In [41], the authors proposed a non-convex optimization framework to accurately detect full and partial symmetries of 3D models. However, the initialization severely affects the performance, and the complexity is also very high. Lipman et al. efficiently found pairs of intrinsically symmetric points in point clouds and triangle meshes using the novel symmetry factored embedding technique [23]. However, their main bottleneck is the high time complexity. Kim et al. used the anti-Möbius transformation to accurately find intrinsically symmetric pairs of points [16]. They first find a sparse set of pairs of intrinsically symmetric points and then transform intrinsic symmetry into extrinsic symmetry using the Möbius transform. However, they require additional space for mid-edge flattening and additional time for finding the anti-Möbius transformation over the set of symmetry-invariant points. Furthermore, false pairs in the first step may severely affect the overall performance.
3 Intrinsic Symmetry Detection
3.1 Background
Let $\mathcal{M}$ be a compact and connected 2-manifold representing the input shape. Let $L^2(\mathcal{M})$ be the space of square-integrable functions defined on $\mathcal{M}$. The Laplace–Beltrami operator $\Delta$ on $\mathcal{M}$ admits an eigenvalue decomposition $\Delta\phi_i = \lambda_i\phi_i$, $i = 1, 2, \ldots$. Here, the $\lambda_i$ are the eigenvalues and the $\phi_i$ are the corresponding eigenfunctions. The eigenfunctions form a basis for the space $L^2(\mathcal{M})$. Therefore, any function $f \in L^2(\mathcal{M})$ can be represented as $f = \sum_i \langle f, \phi_i \rangle \phi_i$. The functional map framework was first proposed in [36] for establishing point-to-point dense correspondence between two isometric shapes. The main idea is to establish correspondences between functions defined on the shapes rather than between points, which reduces the problem to estimating a small $k \times k$ matrix. Let $\mathcal{M}$ and $\mathcal{N}$ be two shapes and let $T_F$ be a linear mapping between functions defined on these shapes; that is, if $f$ and $g$ are two corresponding functions, then $T_F(f) = g$. The mapping $T_F$ is represented by a matrix $C$ such that $\mathbf{b} = C\mathbf{a}$, where $\mathbf{a}$ and $\mathbf{b}$ are the representations of the functions $f$ and $g$ in the truncated bases of $\mathcal{M}$ and $\mathcal{N}$, respectively. Therefore, the main goal is to find the matrix $C$, which completely characterizes the dense correspondence between the two shapes.

3.2 Functional Maps for Intrinsic Symmetry Detection
We extend the functional map framework to detecting intrinsic symmetry, which can be thought of as a shape correspondence problem in which we find correspondences between points of the same shape rather than between points on two different shapes. The functional map framework applies to any pair of isometric shapes, so we can use it to detect intrinsic symmetry, since a symmetric shape is a self-isometric shape [37]. The intrinsic symmetry of $\mathcal{M}$ is a self-isometry $T: \mathcal{M} \to \mathcal{M}$: if the points $x$ and $y$ are intrinsically symmetric, then $T(x) = y$ and $T(y) = x$. We first find the mapping between functions and then use it to find the correspondences between intrinsically symmetric points. Let us consider the space $L^2(\mathcal{M})$ and let $T_F$ be a functional map which maps functions defined on the same shape. This functional map completely characterizes the intrinsic symmetry if $T_F(f) = g$ for every pair $f, g$ of intrinsically symmetric functions, i.e., $g = f \circ T$. Therefore, our goal is to find the matrix $C$ which characterizes the functional mapping for the intrinsic symmetry detection problem. For the problem of finding correspondences between two shapes, various constraints have been imposed on the matrix $C$, which is then obtained as the optimal solution of an optimization problem. However, we show that a closed-form solution exists for the matrix $C$ for the problem of detecting intrinsic symmetry, which we state as follows.
Theorem 3.1
Let $T_F$ be a mapping between functions defined on a shape $\mathcal{M}$ which characterizes the intrinsic symmetry of $\mathcal{M}$, i.e., $T_F(f) = f \circ T$ for every $f \in L^2(\mathcal{M})$. Then the matrix $C$ representing $T_F$ is a diagonal matrix, with $c_{ii} = +1$ if $\phi_i \circ T = \phi_i$, and $c_{ii} = -1$ if $\phi_i \circ T = -\phi_i$.
Proof
The functions $f$ and $g = f \circ T$ belong to the space $L^2(\mathcal{M})$. We can represent $f$ and $g$ as $f = \sum_i a_i \phi_i$ and $g = \sum_i b_i \phi_i$. Since $T_F$ is a linear mapping, we have $T_F(f) = \sum_i a_i T_F(\phi_i)$. Since $T_F(\phi_i)$ is also a function in the space $L^2(\mathcal{M})$, it can be represented in the basis as $T_F(\phi_i) = \sum_j c_{ji} \phi_j$, where $c_{ji} = \langle T_F(\phi_i), \phi_j \rangle$. Therefore, we have $T_F(f) = \sum_j \big(\sum_i c_{ji} a_i\big) \phi_j$. Since $T_F(f) = g$, it follows that $b_j = \sum_i c_{ji} a_i$. Equivalently, we can write $\mathbf{b} = C\mathbf{a}$, where $C = (c_{ji})$. According to [37], the eigenfunctions (corresponding to the non-repeating eigenvalues) are self-isometry invariant up to sign, i.e., $\phi_i \circ T = \pm\phi_i$, $\forall i$. Furthermore, the functional map completely characterizes the intrinsic symmetry. Therefore, $T_F(\phi_i) = \phi_i \circ T = \pm\phi_i$, $\forall i$. Since the eigenfunctions form an orthogonal basis for the space $L^2(\mathcal{M})$, we have $\langle \phi_i, \phi_j \rangle = 0$ if $i \neq j$. Therefore, $c_{ji} = \langle \pm\phi_i, \phi_j \rangle = 0$ for $j \neq i$. Hence, the matrix $C$ is a diagonal matrix. Furthermore, $c_{ii} = +1$ if $\phi_i \circ T = \phi_i$, and $c_{ii} = -1$ if $\phi_i \circ T = -\phi_i$.∎
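Theorem 3.1 can be checked numerically on a toy model. On the interval $[0, 1]$ with the self-isometry $T(x) = 1 - x$, the Laplacian eigenfunctions $\cos(i\pi x)$ alternate between odd and even with respect to $T$, so the functional map matrix of $T$ in this basis should come out diagonal with entries $\pm 1$. A minimal sketch (all names and the grid construction are illustrative, not from the paper):

```python
import numpy as np

n, k = 1000, 6
x = (np.arange(n) + 0.5) / n                 # grid symmetric about x = 1/2
# Neumann Laplacian eigenfunctions cos(i*pi*x) on [0, 1] (a DCT-II basis,
# hence exactly orthogonal on this grid)
phi = np.stack([np.cos(np.pi * i * x) for i in range(1, k + 1)], axis=1)
phi /= np.linalg.norm(phi, axis=0)           # orthonormal columns
perm = n - 1 - np.arange(n)                  # discrete self-isometry T(x) = 1 - x
C = phi.T @ phi[perm]                        # functional map of T in this basis
```

Since $\cos(i\pi(1 - x)) = (-1)^i \cos(i\pi x)$, the computed `C` is diagonal with entries $(-1)^i$, exactly as the theorem predicts.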
Therefore, the problem of determining whether $c_{ii} = +1$ or $c_{ii} = -1$ is equivalent to determining whether $\phi_i \circ T = \phi_i$ or $\phi_i \circ T = -\phi_i$, $\forall i$. We observe that if $\phi_i \circ T = \phi_i$, then $\phi_i$ is a symmetric or even function, and if $\phi_i \circ T = -\phi_i$, then $\phi_i$ is an anti-symmetric or odd function, in the intrinsic sense. We cannot apply the vector-space definition of parity here, since the domain of the eigenfunctions is not a vector space. A function $f: \mathbb{R}^n \to \mathbb{R}$ is an even function if $f(-x) = f(x)$. This definition is not valid for functions defined on manifolds, since $x \in \mathcal{M}$ does not imply $-x \in \mathcal{M}$. However, we generalize the following property of vector spaces to manifolds in order to determine the sign of the eigenfunctions. Let $f$ be a function symmetric on $\mathbb{R}^n$ and let $\ell$ be the line segment joining the mirror-symmetric points $x$ and $-x$. Then it is trivial to show that the restriction of $f$ to $\ell$ is also symmetric, since the segment $\ell$ is itself mapped onto itself by the symmetry. We formally generalize this result to manifolds as follows.

Theorem 3.2
Let $\mathcal{M}$ be a compact and connected 2-manifold and let $T$ be a self-isometry on $\mathcal{M}$. Let $x, y \in \mathcal{M}$ be two points which are intrinsically symmetric, i.e., $T(x) = y$ and $T(y) = x$. Let $\gamma: [0, 1] \to \mathcal{M}$ be the shortest length geodesic curve between the points $x$ and $y$ such that $\gamma(0) = x$ and $\gamma(1) = y$. Then, $T(\gamma) = \gamma$ and $T(\gamma(t)) = \gamma(1 - t)$ for all $t \in [0, 1]$.
Proof
Let $\tilde{\gamma} = T \circ \gamma$. We have to show that $\tilde{\gamma}(t) = \gamma(1 - t)$. Since $T$ is an isometry, it maps a shortest length geodesic on $\mathcal{M}$ to a shortest length geodesic on $\mathcal{M}$ (Proposition 16.3, [11]; Chapter 3, p. 91, [35]). Therefore, $\tilde{\gamma}$ is also a shortest length geodesic. Now, we have $\tilde{\gamma}(0) = T(\gamma(0)) = T(x) = y$ and $\tilde{\gamma}(1) = T(\gamma(1)) = T(y) = x$. Therefore, $\tilde{\gamma}$ is the shortest length geodesic between the points $y$ and $x$ such that $\tilde{\gamma}(0) = y$ and $\tilde{\gamma}(1) = x$. Since there can be only a single shortest length geodesic between two points (except in the presence of continuous symmetry, as on a sphere), both geodesics $\gamma$ and $\tilde{\gamma}$ trace the same path, but with their start and end points flipped. Therefore, $T(\gamma(t)) = \tilde{\gamma}(t) = \gamma(1 - t)$. Since the self-isometry $T$ is an involution, i.e., $T \circ T = \mathrm{id}$, we also have $T(\gamma) = \gamma$.∎
The intuition is that if a shape is intrinsically symmetric, then the shortest length geodesic curve between any two intrinsically symmetric points is itself intrinsically symmetric. This result helps us determine the signs of the eigenfunctions of the Laplace–Beltrami operator. First, we show that the result holds if we restrict the eigenfunctions to the shortest length geodesic curve between intrinsically symmetric points. The restriction of $\phi_i$ to a curve $\gamma$ is defined as $\phi_i|_\gamma(t) = \phi_i(\gamma(t))$. Since $\phi_i \circ T = \pm\phi_i$ whenever the $i$th eigenvalue is non-repeating, each eigenfunction is always either an even (sign $+1$) or an odd (sign $-1$) function. Hence, if the restriction of $\phi_i$ to the shortest length geodesic between intrinsically symmetric points has sign $+1$ ($-1$), then the sign of $\phi_i$ is also $+1$ ($-1$).
Proposition 1
Let $x, y$ be two intrinsically symmetric points and let $\gamma$ be the shortest length geodesic curve between the points $x$ and $y$. Then, $\phi_i \circ T = \pm\phi_i$ implies $\phi_i(\gamma(1 - t)) = \pm\phi_i(\gamma(t))$ for all $t \in [0, 1]$.
Proof
Using Theorem 3.2, we proceed as $\phi_i(\gamma(1 - t)) = \phi_i(T(\gamma(t)))$. We know that $\phi_i \circ T = \pm\phi_i$. Hence, $\phi_i(\gamma(1 - t)) = \pm\phi_i(\gamma(t))$. ∎
We apply the above result to determine whether an eigenfunction is even or odd as follows. We first detect a set of candidate pairs of intrinsically symmetric points. Then we find the shortest length geodesic curve between each pair. Finally, for each eigenfunction $\phi_i$, we determine whether the restricted eigenfunction $\phi_i|_\gamma$ is an even or an odd function for each pair.
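In the discrete setting, this parity test reduces to comparing the sampled values of an eigenfunction along the geodesic with their reversal. A minimal sketch of such a check (the helper name, the tolerance, and the handling of ambiguous cases are our illustrative choices, not the paper's):

```python
import numpy as np

def eigenfunction_sign(vals, tol=0.1):
    """Classify the restriction of an eigenfunction to a geodesic as
    even (+1), odd (-1), or ambiguous (0, i.e. discard this eigenfunction)."""
    v = np.asarray(vals, dtype=float)
    r = v[::-1]                            # values at gamma(1 - t)
    even = np.linalg.norm(v - r)           # residual if phi(g(t)) =  phi(g(1-t))
    odd = np.linalg.norm(v + r)            # residual if phi(g(t)) = -phi(g(1-t))
    scale = np.linalg.norm(v) + 1e-12
    if min(even, odd) > tol * scale:       # neither parity fits well
        return 0
    return 1 if even < odd else -1

t = np.linspace(0.0, 1.0, 101)
assert eigenfunction_sign(np.cos(2 * np.pi * (t - 0.5))) == 1   # even about midpoint
assert eigenfunction_sign(np.sin(2 * np.pi * (t - 0.5))) == -1  # odd about midpoint
```

A mixed signal with no dominant parity falls into the ambiguous case and is discarded, mirroring the paper's choice of dropping eigenfunctions whose restriction is neither clearly even nor clearly odd.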
3.3 Computation of Eigenfunction of LaplaceBeltrami Operator
Let $M = (V, F, E)$ be a triangle mesh, where $V$ is the set of vertices, $F$ is the set of faces, and $E$ is the set of edges. We follow the method in [39] to find the eigenvalues and the corresponding eigenvectors of the Laplace–Beltrami operator in the discrete setting. The discrete Laplace–Beltrami operator is defined by the matrix $L = A^{-1}W$. Both $W$ and $A$ are of size $|V| \times |V|$ and are defined as follows: $W$ is the cotangent weight matrix, with $w_{ij} = -(\cot\alpha_{ij} + \cot\beta_{ij})/2$ for an edge $(i, j)$, where $\alpha_{ij}$ and $\beta_{ij}$ are the angles opposite to the edge, and $w_{ii} = -\sum_{j \neq i} w_{ij}$; $A$ is the diagonal lumped mass matrix, with $a_{ii}$ equal to one third of the total area of the faces incident on vertex $i$ (the area of the shaded region in the inset figure). In the discrete setting, we denote the eigenfunctions by vectors $\boldsymbol{\phi}_i$, which are the solutions of the generalized eigenproblem $W\boldsymbol{\phi} = \lambda A\boldsymbol{\phi}$. We denote the value of $\boldsymbol{\phi}_i$ at the $j$th point or vertex by $\phi_i(j)$.
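This construction can be sketched compactly with SciPy. The function name is ours, and the sketch uses the simpler barycentric area lumping (one third of each incident face area per vertex), which may differ from the exact region shown in the paper's inset figure:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def lb_eigens(V, F, k=10):
    """First k eigenpairs of the cotangent Laplace-Beltrami operator:
    solve W phi = lambda A phi, with W the cotan stiffness matrix and
    A the (barycentric) lumped mass matrix."""
    n = len(V)
    I, J, S = [], [], []
    area = np.zeros(n)
    for f in F:
        for c in range(3):
            i, j, l = f[c], f[(c + 1) % 3], f[(c + 2) % 3]
            u, w = V[i] - V[l], V[j] - V[l]            # edges meeting at corner l
            cot = u.dot(w) / np.linalg.norm(np.cross(u, w))
            I += [i, j, i, j]; J += [j, i, i, j]       # off-diag and diag entries
            S += [-cot / 2, -cot / 2, cot / 2, cot / 2]
        a = 0.5 * np.linalg.norm(np.cross(V[f[1]] - V[f[0]], V[f[2]] - V[f[0]]))
        area[f] += a / 3.0                             # barycentric lumping
    W = sp.csr_matrix((S, (I, J)), shape=(n, n))       # duplicates are summed
    A = sp.diags(area)
    lam, phi = spla.eigsh(W, k=k, M=A, sigma=-1e-8)    # smallest eigenvalues
    return lam, phi
```

On any connected mesh, the smallest eigenvalue is (numerically) zero with a constant eigenvector, which is a quick sanity check for the assembly.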
3.4 Detecting Pairs of Intrinsically Symmetric Points
In order to detect the intrinsic symmetry, according to Theorem 3.1, we need to find the diagonal matrix $C$ defined by $c_{ij} = 0$ if $i \neq j$, $c_{ii} = +1$ if $\phi_i$ is an even function, and $c_{ii} = -1$ if $\phi_i$ is an odd function. According to Proposition 1, the eigenfunction $\phi_i$ is an even (odd) function if its restriction to the shortest length geodesic between intrinsically symmetric points is also an even (odd) function. Therefore, our first task is to find a few accurate candidate pairs of intrinsically symmetric points.
We find the heat kernel signature (HKS) feature points [48] on the given mesh $M$. HKS feature points are the local maxima of the heat kernel signature over the vertices of the shape, with the diffusion time set as defined in [48]. Let $S = \{p_1, \ldots, p_m\}$ be the set of HKS feature points. To find the pairs of intrinsically symmetric points, we do not directly match the HKS descriptors, since this strategy fails frequently. In Fig. 1(a), we show the detected HKS feature points and the pairs detected by direct matching of their HKS descriptors on the Kids model [9]. In Fig. 1(b), we show zoomed views of the pairs of symmetric points on the feet and the hands for better visualization. We observe that the tips of two fingers of the same hand get paired. The reason behind such matches is that if two neighboring points are subjected to heat sources of the same strength, then their heat diffusion processes will be similar, and therefore so will their HKS descriptors. Hence, we have to assign a high cost to pairing two neighboring points. Furthermore, determining whether two points are neighbors requires the geodesic distance between each possible pair, which can be costly since there can be a large number of possible pairs. We propose a fast approach to determine whether two points are neighbors, based on the following observation.
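For reference, the heat kernel signature used above has the spectral form $h_t(v) = \sum_i e^{-\lambda_i t}\,\phi_i(v)^2$ [48], which is straightforward to evaluate from the eigenpairs. A sketch, vectorized over a set of time samples (the function name and argument layout are illustrative):

```python
import numpy as np

def hks(phi, lam, ts):
    """Heat kernel signature h_t(v) = sum_i exp(-lam_i * t) * phi_i(v)^2.
    phi: (n, k) eigenvectors, lam: (k,) eigenvalues, ts: (m,) time samples.
    Returns an (n, m) descriptor matrix, one row per vertex."""
    decay = np.exp(-np.outer(ts, lam))          # (m, k) spectral weights
    return np.einsum('nk,mk->nm', phi ** 2, decay)
```

Feature points are then the vertices where this descriptor is a local maximum over the mesh neighborhood, as described above.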
Observation 1. Let $x$ and $y$ be any two neighboring points on $\mathcal{M}$. Then $\operatorname{sign}(\phi_i(x)) = \operatorname{sign}(\phi_i(y))$ for the low-frequency eigenfunctions, i.e., for small $i$.
Here, the sign vector of a feature point collects these signs: the $i$th element of the sign vector $\mathbf{s}_u$ is the sign of the $i$th eigenfunction at the $u$th point of the set $S$. We give an intuitive justification of this observation based on the nodal domains of the eigenfunctions. The nodal set of the eigenfunction $\phi_i$ is the set $\{x \in \mathcal{M} : \phi_i(x) = 0\}$. A nodal domain is a connected component of the complement of the nodal set, i.e., one of the components or segments of the shape which are separated by the nodal set. The value of an eigenfunction in any of its nodal domains is either strictly positive or strictly negative [59, 43]. Therefore, if two points lie in the same nodal domain, the eigenfunction has the same sign at both points. Now, whether two neighboring points share a nodal domain depends on the size of the nodal domains. According to Courant's nodal domain theorem, the number of nodal domains of $\phi_i$ is at most $i$ [59]. Therefore, the nodal domains remain significantly large for the low-frequency eigenfunctions. Hence, neighboring points remain in the same nodal domain for all the low-frequency eigenfunctions, and the signs of all low-frequency eigenfunctions coincide on neighboring points. We choose the eigenfunctions corresponding to the first $k$ lowest eigenvalues for our experiments.
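The neighbor test and the resulting pairing cost can be sketched as follows. The additive penalty form and the constant `mu` are our illustrative choices; the paper's exact affinity definition is not reproduced here:

```python
import numpy as np

def sign_vectors(phi, feat_idx, k=6):
    """Signs of the first k low-frequency eigenfunctions at each feature point."""
    return np.sign(phi[np.asarray(feat_idx)][:, :k])

def pairing_cost(hks_desc, signs, mu=1e6):
    """HKS descriptor distance between feature points, plus a large penalty
    mu when two points share the same sign vector (likely neighbors)."""
    d = np.linalg.norm(hks_desc[:, None, :] - hks_desc[None, :, :], axis=2)
    same = (signs[:, None, :] == signs[None, :, :]).all(axis=2)
    return d + mu * same
```

Because identical sign vectors indicate that two points sit in the same nodal domains of all low-frequency eigenfunctions, the penalty suppresses exactly the finger-tip-to-finger-tip matches described above.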
Hence, we assign a high cost to pairing the points $p_u$ and $p_v$ if their sign vectors $\mathbf{s}_u$ and $\mathbf{s}_v$ are the same. Let $H$ be the matrix of heat kernel signatures of the detected HKS feature points, computed over a set of diffusion time steps. We define the affinity matrix $E$ by combining the distance between HKS descriptors with a sign penalty: $E_{uv}$ is the distance between the $u$th and $v$th rows of $H$, plus $\mu$ if $\mathbf{s}_u = \mathbf{s}_v$ and $0$ otherwise, where $\mu$ is any large positive constant. We now pair these points such that if $p_v$ is the intrinsically symmetric point of $p_u$, then $p_u$ is the intrinsically symmetric point of $p_v$. We achieve this by representing the matching by a symmetric binary matrix $P \in \{0, 1\}^{m \times m}$, where $P_{uv} = 1$ if the points $p_u$ and $p_v$ form a pair and $0$ otherwise. To achieve one-to-one matching, we would enforce the constraints $P\mathbf{1} = \mathbf{1}$ and $P^\top\mathbf{1} = \mathbf{1}$, where $\mathbf{1}$ is the vector of size $m$ with all elements equal to 1. However, many points cannot be paired. Therefore, we cap the number of pairs at $N$, which we represent by the constraint $\mathbf{1}^\top P \mathbf{1} = 2N$, and, to keep the problem feasible, we relax the one-to-one matching constraints to $P\mathbf{1} \leq \mathbf{1}$ and $P^\top\mathbf{1} \leq \mathbf{1}$. We thus frame the pairing of the points as the optimization problem

$\min_{P} \ \sum_{u,v} E_{uv} P_{uv} \quad \text{s.t.} \quad P = P^\top,\ P\mathbf{1} \leq \mathbf{1},\ P^\top\mathbf{1} \leq \mathbf{1},\ \mathbf{1}^\top P \mathbf{1} = 2N,\ P_{uv} \in \{0, 1\}.$  (1)
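Problem (1) can be handed directly to an off-the-shelf integer programming solver. A sketch using SciPy's `milp`, with one binary variable per unordered pair so that the symmetry constraint $P = P^\top$ is built in; the helper name and the toy cost matrix in the test are illustrative:

```python
import numpy as np
from itertools import combinations
from scipy.optimize import milp, LinearConstraint, Bounds

def pair_points(E, N):
    """Select N disjoint pairs of feature points minimizing the total
    affinity cost E (a symmetric m x m matrix)."""
    m = E.shape[0]
    pairs = list(combinations(range(m), 2))            # unordered pairs u < v
    c = np.array([E[u, v] for u, v in pairs])
    A = np.zeros((m, len(pairs)))                      # point-to-pair incidence
    for p, (u, v) in enumerate(pairs):
        A[u, p] = A[v, p] = 1
    cons = [LinearConstraint(A, 0, 1),                 # each point in <= 1 pair
            LinearConstraint(np.ones(len(pairs)), N, N)]  # exactly N pairs
    res = milp(c, integrality=np.ones(len(pairs)),
               bounds=Bounds(0, 1), constraints=cons)
    return [pairs[p] for p in np.flatnonzero(res.x > 0.5)]
```

This is the Python analogue of solving the binary program with MATLAB's intlinprog; with only a handful of feature points the solve time is negligible.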
We note that problem (1) is equivalent to the binary linear program

$\min_{\mathbf{x}} \ \mathbf{e}^\top \mathbf{x} \quad \text{s.t.} \quad B\mathbf{x} \leq \mathbf{1},\ \mathbf{1}^\top \mathbf{x} = 2N,\ \mathbf{x} \in \{0, 1\}^{m^2}.$  (2)

Here, $\mathbf{x} = \operatorname{vec}(P)$, $\mathbf{e} = \operatorname{vec}(E)$, $\mathbf{1}$ is the vector of all ones of the appropriate size, and the constraint matrix $B$ encodes the row and column sum constraints: its $u$th row is the circular shift, by $um$ elements to the right, of the row vector of size $m^2$ whose first $m$ elements are equal to 1 and whose remaining elements are 0. The worst-case time complexity of integer programming is exponential in the number of variables; however, the size of our problem is very small, since the number of feature points $m$ is small. We use the MATLAB function intlinprog to solve this problem, which takes a negligible amount of time.

3.5 Determining the Sign of Eigenfunctions
Proposition 1 states that the eigenfunction $\phi_i$ is an even (odd) function if its restriction to the shortest length geodesic between any two intrinsically symmetric points $x$ and $y$ is an even (odd) function. Let $\{(x_l, y_l)\}$ be the set of detected pairs of intrinsically symmetric points. We find the shortest length geodesic curve between two intrinsically symmetric points using [50] with the approximate setting (Dijkstra's algorithm), since the exact geodesic curve may not pass through the vertices of the mesh, which would require interpolation to calculate the values of $\phi_i$ along the curve. Let $\mathbf{r}_{i,l}$ be the restriction (a vector of size equal to the number of vertices on the geodesic) of the eigenfunction $\phi_i$ to the shortest length geodesic curve between the intrinsically symmetric points $x_l$ and $y_l$. Then, the sign of the eigenfunction $\phi_i$ is $+1$ if $\mathbf{r}_{i,l}$ matches its reversal, and $-1$ if it matches the negation of its reversal. We do not consider the eigenfunction $\phi_i$ if neither holds. Equivalently, we define the diagonal entries of the functional correspondence matrix as $c_{ii} = \operatorname{sign}(\phi_i)$. In Fig. 2(a) and (c), we show two eigenfunctions, together with the geodesic curves for two pairs of detected intrinsically symmetric points, on a shape from the Kids dataset [9]. In Fig. 2(b) and (d), we show the corresponding restricted functions, and we observe that one restriction is even and the other is odd. One could directly use the values of an eigenfunction at the two intrinsically symmetric points to find the sign, instead of checking it along the geodesic between them. However, this approach can be sensitive to noise: if the value of an eigenfunction at a feature point is corrupted by noise, the point-based method fails, whereas it is much less likely that noise changes the value of an eigenfunction at all the points of the geodesic. Our geodesic-based method detects the sign correctly due to the averaging of signs along the curve.
3.6 Correcting the Eigenfunctions
The models of the benchmark datasets are obtained by applying an imperfect isometry, so the theory only holds approximately. Furthermore, some of the triangles may not be Delaunay triangles, and the eigenfunctions are sensitive to changes in the triangulation of the mesh. Therefore, the eigenfunctions may not be perfectly even or odd, which may make the symmetry detected in Section 3.7 erroneous. Consider Fig. 3(a), where the eigenfunction is not perfectly even on the legs. We therefore transform the eigenfunctions such that they preserve the pairs of intrinsically symmetric functions, extending the framework in [19]. Let $\Phi$ be the matrix of the first $k$ eigenfunctions and $\Lambda$ the diagonal matrix of the corresponding eigenvalues. Let $\hat{\Phi} = \Phi R$ be the transformed basis obtained by applying a linear operator $R$ to the basis $\Phi$. Then, we impose the constraints $R^\top R = I$ and $\operatorname{off}(R^\top \Lambda R) = 0$ so that the new basis functions remain admissible for the original eigenfunction decomposition problem, as proposed in [19], where $\operatorname{off}(M)$ denotes the off-diagonal part of any matrix $M$.
Now, let $f$ and $g$ be two functions which are intrinsic images of each other, that is, $g = f \circ T$. Let $\mathbf{f}$ and $\mathbf{g}$ be the discrete versions of $f$ and $g$, respectively, and let $\hat{\mathbf{a}}$ and $\hat{\mathbf{b}}$ be the representations of $\mathbf{f}$ and $\mathbf{g}$ in the transformed basis $\hat{\Phi}$. We want the transformed basis to satisfy $\hat{\mathbf{b}} = C\hat{\mathbf{a}}$. Let $A$ and $B$ be the matrices whose columns hold the coefficients of the (bidirectional) pairs of intrinsically symmetric functions. We formulate the following optimization problem to find the transformation matrix $R$.
$\min_{R} \ \|C R^\top A - R^\top B\|_F^2 \quad \text{subject to} \quad R^\top R = I,\ \operatorname{off}(R^\top \Lambda R) = 0.$  (3)
Here, the constraint $R^\top R = I$ follows from the fact that the transformed basis must remain an orthogonal basis, and the feasible set is the special orthogonal group $SO(k)$. Hence, we solve the following optimization problem.
$\min_{R \in SO(k)} \ \|C R^\top A - R^\top B\|_F^2 + \mu \, \|\operatorname{off}(R^\top \Lambda R)\|_F^2.$  (4)
We use the Riemannian trust-region method, proposed in [1, 2], to solve this optimization problem, using the manopt toolbox [7]. We provide the Riemannian gradient and Hessian of this cost function in the supplementary material. We set the weight $\mu$ empirically in our experiments. We choose the functions $f_l$ and $g_l$ as indicator functions: $f_l$ equals 1 at the point $x_l$ and 0 everywhere else, and $g_l$ equals 1 at the point $y_l$ and 0 everywhere else, where $x_l$ and $y_l$ are intrinsically symmetric points. In Fig. 3(a) and (b), we show the effect of the correction on an eigenfunction. We observe that the eigenfunction, which was not perfectly symmetric on the legs and the belly, becomes more symmetric; the large blue patch on the belly, which was shifted towards the left before correction, moves to the center. Here, the value of the eigenfunction is color encoded: more blue implies more negative and more yellow implies more positive. In Fig. 3(c), we show the restriction of the eigenfunction to the geodesic between two symmetric points, which becomes more symmetric after the correction.
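As a stand-in for the Riemannian trust-region solver, the structure of the data term of problem (4) can be illustrated with a plain projected gradient descent on the orthogonal group, using an SVD-based retraction. This is a simplified sketch, not the paper's manopt-based implementation, and it omits the off-diagonal penalty term; the function name and step size are ours:

```python
import numpy as np

def correct_basis(C, A, B, iters=500, lr=0.002):
    """Minimize ||C Q A - Q B||_F^2 over orthogonal Q by gradient descent
    with an SVD retraction back onto the orthogonal group. C is the
    diagonal +-1 sign matrix; the columns of A and B hold coefficients of
    intrinsically symmetric function pairs. (Q plays the role of R^T.)"""
    Q = np.eye(C.shape[0])
    for _ in range(iters):
        E = C @ Q @ A - Q @ B                   # residual
        G = 2 * (C.T @ E @ A.T - E @ B.T)       # Euclidean gradient wrt Q
        U, _, Vt = np.linalg.svd(Q - lr * G)    # retraction: project back
        Q = U @ Vt                              # onto the orthogonal group
    return Q
```

A trust-region method on the manifold, as used in the paper, converges much faster and more reliably than this first-order sketch, but the objective and the feasible set are the same.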
3.7 Dense Intrinsically Symmetric Correspondence
Let $f_j$ be the function equal to 1 at vertex $j$ and 0 elsewhere, and similarly let $f_{j'}$ be the function equal to 1 at vertex $j'$ and 0 elsewhere. Let $\mathbf{a}_j$ and $\mathbf{a}_{j'}$ be their representations in the basis $\hat{\Phi}$. Then, if the points $j$ and $j'$ are intrinsically symmetric, $\mathbf{a}_{j'} = C\mathbf{a}_j$. Now, following [36], the intrinsically symmetric point of vertex $j$ is the nearest neighbor of the $j$th row of the matrix $\hat{\Phi}C$ among the rows of the matrix $\hat{\Phi}$. The obtained correspondences are continuous, as shown in [36]. Our method is invariant to the ordering of the eigenfunctions, since the sign of each $\phi_i$ depends only on the eigenfunction $\phi_i$ itself. In Fig. 3(d), we show the detected symmetry along a geodesic on the legs before and after the correction.
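The nearest-neighbor recovery step can be sketched with a kd-tree. In the toy setting below, a cosine basis on a grid that is symmetric about its midpoint, the recovered map is exactly the reflection $j \mapsto n - 1 - j$ (all names are illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def dense_symmetry_map(phi, signs):
    """For each vertex j, find the nearest neighbor of its sign-flipped
    spectral coordinates (row j of Phi C) among all rows of Phi."""
    tree = cKDTree(phi)
    _, idx = tree.query(phi * signs)     # apply the diagonal C to every row
    return idx

# toy example: cosine basis on a grid symmetric about 1/2
n, k = 200, 4
x = (np.arange(n) + 0.5) / n
phi = np.stack([np.cos(np.pi * i * x) for i in range(1, k + 1)], axis=1)
signs = np.array([(-1.0) ** i for i in range(1, k + 1)])
idx = dense_symmetry_map(phi, signs)
```

On a mesh, `phi` would hold the corrected basis $\hat{\Phi}$ and `signs` the detected diagonal of $C$; the paper uses the ANN library for the same query.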
Table 1: Total time (min) to detect intrinsic symmetry in all models of the TOSCA dataset.

              MT    BIM   OFM   GRS   Our
Time (min)    --    360   60    24    8

Table 2: Correspondence rate and mesh rate on the SCAPE dataset.

                 MT    BIM   OFM   GRS   Our
Corr rate (%)    82.0  84.8  91.7  94.5  97.5
Mesh rate (%)    71.8  76.1  97.2  98.6  100

Table 3: Per-class correspondence rate and mesh rate on the TOSCA dataset.

           Corr rate (%)                    Mesh rate (%)
           MT    BIM   OFM   GRS   Our     MT    BIM   OFM   GRS   Our
Cat        66.0  93.7  90.9  96.5  95.6    54.6  90.9  90.9  100   100
Centaur    92.0  100   96.0  92.0  100     100   100   100   100   100
David      82.0  97.4  94.8  92.5  96.2    57.1  100   100   100   100
Dog        91.0  100   93.2  97.4  98.8    88.9  100   88.9  100   100
Horse      92.0  97.1  95.2  99.4  97.3    100   100   87.5  100   100
Michael    87.0  98.9  94.6  91.4  96.5    75    100   100   100   100
Victoria   83.0  98.3  98.7  95.5  96.2    63.6  100   100   100   100
Wolf       100   100   100   100   100     100   100   100   100   100
Gorilla    --    98.9  98.9  100   100     --    100   100   100   100
Average    85.0  98.0  95.1  94.5  97.8    76    98.7  92.6  100   100
4 Results and Evaluation
4.1 Time Complexity
Let $n$ be the number of vertices and $k$ be the number of eigenfunctions used. The feature points are the local maxima of the HKS, which requires finding the 2-ring neighborhood of each vertex. We use the half-edge data structure, so the overall time for finding the feature points is $O(n)$. The optimization problem in Eq. (4) involves only $k \times k$ matrices, so its cost when solved with the Riemannian trust-region method is independent of $n$. We use the ANN library [4] to find, for each row of the matrix $\hat{\Phi}C$, the nearest neighbor among the rows of the matrix $\hat{\Phi}$, which takes $O(n \log n)$ time. Hence, the overall time complexity is $O(n \log n)$, since $k$ is a small constant with $k \ll n$ in our experiments. The cost of computing the $k$ smallest eigenvalues and corresponding eigenvectors of the sparse symmetric matrix is common to all spectral decomposition based methods.
4.2 Comparison
Evaluation Metrics. We use the following evaluation metrics, as defined in [16], to compare the results of our method with the state-of-the-art methods. Correspondence rate: let $(x, y)$ be a ground-truth correspondence and $(x, \hat{y})$ be the estimated correspondence; the correspondence is a true positive if the geodesic distance between the points $y$ and $\hat{y}$ is below the threshold used in [16]. The correspondence rate is the fraction of true positives among all estimated correspondences. Mesh rate: the fraction of shapes, out of the total number of shapes, whose correspondence rate exceeds the threshold used in [16]. Time complexity: the total time required for computing the symmetry of every shape in the given dataset.
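Both rates are straightforward to compute once per-correspondence geodesic errors are available. A sketch (the tolerance values in the test are illustrative placeholders, since the exact thresholds of [16] are not restated here):

```python
import numpy as np

def correspondence_rate(geo_err, tol):
    """Fraction of estimated correspondences whose geodesic error (distance
    between ground-truth and estimated target points) is below tol."""
    return float(np.mean(np.asarray(geo_err) < tol))

def mesh_rate(corr_rates, thresh):
    """Fraction of shapes whose correspondence rate exceeds thresh."""
    return float(np.mean(np.asarray(corr_rates) > thresh))
```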
Datasets. We evaluate our approach on the SCAPE [3] and TOSCA [8] datasets. The SCAPE dataset contains 71 models, each with 12500 vertices and 24998 faces. The TOSCA dataset contains 80 models. On average, around 20 ground-truth intrinsically symmetric correspondences are provided for each model in SCAPE and TOSCA. In Fig. 4, we show a few results of the proposed approach on both datasets; we show only sparse correspondences for better visualization.
Comparison Methods. We compare the results of our approach on the datasets SCAPE and TOSCA with the four methods Möbius transformation voting (MT) [16], Blended Intrinsic Maps (BIM) [17], Properly Constrained Orthogonal Functional Map (OFM) [24], and Group Representation of Symmetries (GRS) [52].
Discussion of the comparison. In Table 1, we present the total time required to detect intrinsic symmetry in all the models of the TOSCA dataset for each method. We observe that our method is the fastest on the TOSCA dataset: it takes around 6 seconds per model, whereas BIM takes around 270 seconds, OFM around 45 seconds, and GRS around 18 seconds. Our method takes 4.2 minutes to compute the intrinsic symmetry of all the models of the SCAPE dataset. The likely reasons for the faster computation are that we find the correspondence matrix using a closed-form solution and determine the sign of the eigenfunctions by computing approximate shortest length geodesic curves between intrinsically symmetric points. In Tables 2 and 3, we present the correspondence rate (Corr rate) and the mesh rate of all the methods on the models of the SCAPE and the TOSCA datasets, respectively. On TOSCA, the mesh rate of our method is 100% and the correspondence rate is 97.8%, which is very close to the state-of-the-art correspondence rate of 98% achieved by [17]. However, the average computation time per mesh is around 270 seconds for [17], compared to around 6 seconds for our method. We achieve state-of-the-art performance on the SCAPE dataset.
Effect of holes. In Fig. 5, we show the detected intrinsic symmetry on a partial model from the SHREC'16 [9] dataset. Here, the partial shape is obtained by cutting holes in the original shape such that it retains 90% of the area of the original shape. We observe that our method is robust to significant holes.
5 Conclusions
We have presented a fast and accurate algorithm for detecting intrinsic symmetry in triangle meshes. We showed that the functional correspondence matrix is diagonal, with diagonal entries equal to $+1$ ($-1$) if the corresponding eigenfunctions are even (odd). We showed that the restriction of an even (odd) eigenfunction to the shortest length geodesic between any two intrinsically symmetric points is also an even (odd) function. This result helped us derive a closed-form solution for the diagonal entries of this matrix. We achieved state-of-the-art performance on the SCAPE dataset and the second best on the TOSCA dataset, with the best running time. Furthermore, our approach is invariant to the ordering of the eigenfunctions and robust to the presence of holes in the input mesh.

Our method is limited to intrinsic reflective symmetry; it cannot find other types of symmetry, such as rotational symmetry, and we would like to extend our approach to more general symmetries. Our approach may also fail to detect intrinsic symmetry in non-connected manifolds. As future work, we would like to extend the functional map framework to detect intrinsic symmetries in non-connected manifolds.
Acknowledgment: R. Nagar was supported by the TCS research Scholarship.
Appendix
Computing the Riemannian gradient and Hessian for the problem defined in Eq. (4) of the paper.

Consider the optimization problem defined in Eq. (3) of the main paper,

$\min_{R} \ \|C R^\top A - R^\top B\|_F^2 \quad \text{subject to} \quad R^\top R = I,\ \operatorname{off}(R^\top \Lambda R) = 0.$  (5)

Here, the constraint $R^\top R = I$ follows from the fact that the transformed basis is an orthogonal basis, and the feasible set is the special orthogonal group $SO(k)$. Therefore, we solve the optimization problem

$\min_{R \in SO(k)} \ f(R), \qquad f(R) = f_1(R) + \mu f_2(R),$  (6)

where $f_1(R) = \|C R^\top A - R^\top B\|_F^2$ is the data term and $f_2(R) = \|\operatorname{off}(R^\top \Lambda R)\|_F^2$ is the penalty corresponding to the second constraint.
Let $f$ be a scalar function on $SO(k)$ defined as above, and let $\nabla f$ be its gradient in the ambient Euclidean space (also called the classical gradient). Since $f$ is the restriction to $SO(k)$ of a function defined on the space of $k \times k$ matrices, the Riemannian gradient of $f$ is the projection of the classical gradient onto the tangent space of $SO(k)$ [2]. Therefore, in order to find the Riemannian gradient of $f$, we first need to find the classical gradient of $f$.
We follow [19] and the matrix identities in [61] to find the classical gradients of $f_1$ and $f_2$. For the form of the objective given in Eq. (6), writing $E = C R^\top A - R^\top B$ for the residual, a direct computation gives

$\nabla f_1(R) = 2\,(A E^\top C - B E^\top), \qquad \nabla f_2(R) = 4\,\Lambda R \operatorname{off}(R^\top \Lambda R),$

where we used the symmetry of $R^\top \Lambda R$ and the fact that $\operatorname{off}(\cdot)$ has zero diagonal. Therefore, $\nabla f(R) = \nabla f_1(R) + \mu \nabla f_2(R)$, and the Riemannian gradient is the projection of $\nabla f(R)$ onto the tangent space of $SO(k)$ at $R$.

The analytical Riemannian Hessian contains many matrix multiplications. Therefore, we use the numerical approximation option in the manopt toolbox [7] to compute it. The Riemannian Hessian of the function $f$ at $R$ in a tangent direction $\xi$ is defined as the covariant derivative of the Riemannian gradient of $f$ along $\xi$ [2].