1 Introduction
What is it indeed that gives us the feeling of elegance in a solution, in a demonstration? It is the harmony of the diverse parts, their symmetry, their happy balance.
Henri Poincaré
Symmetry is a common pattern that appears ubiquitously in the world. The majority of living things, including humans, animals, and plants (e.g., flowers), have some form of symmetry. It is also a widely employed design principle in man-made objects, including buildings, furniture, and vehicles, to name a few.
Due to its wide applicability, symmetry information has been exploited in computer graphics to help address many geometry processing tasks, including shape matching [28], segmentation [8], editing [18], completion [26], and understanding [19]. In these application systems, symmetry detection is usually an integral component, so efficient symmetry detection, especially achieving real-time efficiency, has significant benefits, e.g., to avoid impeding real-time performance in 3D acquisition/reconstruction, and for improved user experience in interactive shape editing by reducing users' waiting time.
In geometry processing, researchers mainly focus on symmetry in the spatial domain, including extrinsic symmetry defined in Euclidean space and intrinsic symmetry defined in non-Euclidean (manifold) space. Extrinsic symmetry refers to shape invariance w.r.t. rigid (including reflectional) transformations. Compared to extrinsic symmetry, intrinsic symmetry is more difficult to detect due to its much larger solution space, as discussed in previous work [30, 10, 22].
Given a shape model, intrinsic symmetry detection aims to estimate a self-homeomorphism on the manifold that preserves the geodesic distance between each point pair. Usually, the manifold is discretized as a triangle mesh, and algorithms predict a pointwise correspondence matrix to represent symmetric pairs.
State-of-the-art methods for intrinsic symmetry detection are largely based on embedding the symmetry space into some lower-dimensional space, such as the Möbius transformation space [10], the Global Point Signature (GPS) space [22], or the functional map space [30, 20], and performing random sampling or voting, which suffers from high computational cost and uncertainty of results due to randomness. Despite great effort, efficient and robust detection of intrinsic symmetry remains challenging. Existing state-of-the-art methods typically take several seconds or longer to analyze one shape [20], and may produce unreliable results for difficult cases. To address this, we design the first learning-based method to handle the intrinsic symmetry problem. Like most existing works, we focus on intrinsic reflectional symmetry as it is most common in the real world. Learning intrinsic symmetry directly on meshes is challenging, due to their irregular connectivity and the global nature of symmetry. We simplify this problem when designing the deep neural network, such that it does not directly process the edges and faces of the mesh, but instead takes intrinsic features as input.
Similar to [20], given an input mesh, the symmetry mapping defined on it can be represented using a functional map, or equivalently using a functional map matrix. Laplace-Beltrami eigenfunctions can be extracted to provide a basis for analysis. In the matrix, entries corresponding to eigenfunctions associated with non-repeating eigenvalues are determined by the sign (odd or even) of the eigenfunction after the symmetry mapping is applied. The state-of-the-art work [20] determines the sign of each eigenfunction through random sampling. Although it is faster than previous methods, it is still slow (requiring several seconds for a typical mesh), and may not be sufficiently robust. To address this, we train a deep neural network to predict the sign of each eigenfunction. We design SignNet, a deep neural network for sign prediction that, in addition to the eigenfunction to be predicted, also takes the first few Laplacian eigenfunctions as input; these effectively encode intrinsic descriptions of the mesh characteristics, while avoiding coping with mesh connectivity explicitly. To make the computation more efficient, we truncate the Laplace-Beltrami eigenfunctions in the spectral domain to lower the dimension of the representation. After predicting the entries of the functional map matrix, we apply a post-processing step to further fine-tune the results (addressing issues such as near-identical eigenvalues and slight non-isometry) and convert the functional map to one-to-one point correspondences.
The main contributions of this work are summarized as follows:

We propose the first learning-based method to detect global intrinsic reflectional symmetry of shapes. Compared to previous works, our method achieves real-time performance and is much more efficient than the state of the art (over 100 times faster). Our method also achieves higher accuracy, and is more robust.

The intrinsic symmetry problem is formulated using a functional map. To compute the entries of the functional map matrix, we design a novel deep neural network to determine the sign of each eigenfunction. The network, which predicts the sign of individual eigenfunctions using intrinsic features, is compact and generalizes well to new shapes.
2 Related Work
Intrinsic Symmetry Detection.
Many previous works have addressed intrinsic symmetry detection. Ovsjanikov et al. [22] formulate the concept of intrinsic symmetry. They propose to use the Global Point Signature (GPS) [25] to transform the intrinsic symmetry of shapes into Euclidean symmetry in the signature embedding space. The symmetry is detected by first deciding the sign sequence of eigenfunctions and then finding the nearest neighbors of the GPS of points. Xu et al. [32, 31] extend the concept of intrinsic symmetry and introduce partial symmetry, where some parts of an object are symmetric. In this paper, we focus on global intrinsic symmetry due to its wide applicability, as most research in this area does.
To address the large solution space, some works parametrize intrinsic symmetry in a lower-dimensional space. A highly related problem is investigated by Mitra et al. [17], who propose a method to symmetrize imperfectly symmetric objects. They find intrinsically symmetric point pairs by voting, and then parametrize possible transformations in a canonical space and optimize the transformation to align symmetric pairs. Kim et al. [10] use another parametrization of symmetry transformations. They find a set of symmetric points by detecting critical points of the Average Geodesic Distance (AGD) function, and generate candidate anti-Möbius transformations that can describe the symmetric transformation by enumerating subsets of the points. As a voting-based method, its running time could be an issue. Also, the use of anti-Möbius transformations limits the method to genus-zero manifolds. Lipman et al. [13] detect symmetry by finding the orbits of points under symmetric transformations. A fuzzy pointwise symmetry correspondence matrix is generated randomly, based on which they further compute a Symmetry Factored Embedding (SFE) and Symmetry Factored Distance (SFD). However, the computation of the correspondence matrix is very time-consuming.
The relationship between symmetry groups and matrices is studied in [13]. Similarly, Wang et al. [30] establish a homeomorphism between the symmetry group and a multiplication group of matrices. They introduce the functional map to parametrize the symmetry and limit the search space of matrix entries to the subspace of eigenfunctions. However, due to noise in manifolds and errors during numerical calculation, eigenvalues which are ideally identical are usually computed as different values in practice, making it difficult to determine the true subspaces and resulting in poor symmetry detection. As described in [30], their continuity and sparsity make functional maps a suitable representation for correspondence problems, including intrinsic symmetry. Functional maps are also used in [20]. As also mentioned in [22], eigenfunctions are invariant under self-isometry, apart from sign ambiguity, and the diagonal entries of the functional map matrix are related to the signs of the corresponding eigenfunctions. To decide the signs, landmark symmetric point pairs and the geodesic lines connecting them are selected. Nagar and Raman [20] design an explicit solution to this problem, but since their method depends on the landmark pairs, the random sampling requires a trade-off between robustness and computational complexity. Compared to state-of-the-art methods, our learning-based method avoids explicit sampling and is much faster (over 100 times faster for a typical example), achieving real-time performance. It circumvents the randomness of sampling, and is thus more robust and accurate.
Shape Analysis with Deep Learning.
Our method learns the properties of eigenfunctions defined on manifolds using deep neural networks. We review research that defines neural networks on 3D shapes. With increasing requirements for faster and better analysis of 3D geometry, recent works exploit learning on shapes with deep learning. Boscaini et al. [4] design an anisotropic convolutional neural network to learn correspondences across shapes. Masci et al. [15] also design a network in the spatial domain. Alternatively, another category of work constructs neural networks in the spectral domain. Bruna et al. [7] introduce a spectral convolutional layer on graphs, which can be viewed as a general form of meshes. As described in [6], a fundamental problem of spectral convolution is its dependency on the basis, making it difficult to generalize to different domains. To mitigate this, Yi et al. [33] propose a network architecture to synchronize the spectral domains and then perform convolutional operations on them. Rodolà et al. [24] design a fully connected network to learn features that can generate functional map matrices; however, fully connected networks may suffer from overfitting, and their method requires pointwise correspondences to train the model, which is not required by our method.
In this paper, we aim to detect intrinsic symmetries for general shapes, where the topology and triangulation may vary significantly. We therefore prefer a network architecture that can run robustly in cross-domain settings. To circumvent the irregular connectivity of meshes, we take as input intrinsic geometric features defined on mesh vertices, namely Laplacian eigenfunctions, which implicitly carry connectivity information, but avoid coping with complex mesh connectivity. Our method thus handles general mesh topology and has good generalizability.
3 Representing Intrinsic Symmetry by Functional Maps
To cope with discrete and high-dimensional pointwise correspondence matrices, we use functional maps to represent the self-mapping. The functional map was introduced in [21], first used to describe the correspondences between two shapes. In our problem, a self-isometry T can also be viewed as a mapping between two identical shapes M. This mapping naturally induces a bijective transformation T_F in the square-integrable space L²(M), such that
T_F(f) = f ∘ T⁻¹.   (1)
Three remarks are presented in [21] w.r.t. the functional map T_F, which are summarized below.
Proposition 3.1.
The original self-isometry T can be recovered from T_F.
Proposition 3.2.
The functional map T_F is a linear map between function spaces.
Proposition 3.3.
Assume that L²(M) is equipped with an orthogonal basis {φ_i}. For each i, j, the functional map can be represented by a matrix C, with entries c_ij = ⟨T_F(φ_j), φ_i⟩. For each function f with coefficient vector a, the coefficient vector of the mapped function T_F(f) is Ca.
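Proposition 3.3 can be checked numerically. The following sketch is illustrative only: a random orthonormal basis and a permutation self-mapping stand in for Laplacian eigenfunctions and a discrete isometry. It builds C from the basis and verifies that coefficients transform as a ↦ Ca:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Orthonormal basis for functions sampled on n points (columns of Phi).
Phi, _ = np.linalg.qr(rng.standard_normal((n, n)))

# A self-mapping T, discretized as a permutation of the points.
perm = rng.permutation(n)

def T_F(f):
    # Pullback of f under T^{-1}: (T_F f)(T(p)) = f(p).
    out = np.empty_like(f)
    out[perm] = f          # point i moves to perm[i]
    return out

# Functional map matrix: c_ij = <T_F(phi_j), phi_i>.
C = Phi.T @ np.column_stack([T_F(Phi[:, j]) for j in range(n)])

# For any f with coefficients a = Phi^T f, the coefficients of T_F(f) are C a.
f = rng.standard_normal(n)
a = Phi.T @ f
assert np.allclose(Phi.T @ T_F(f), C @ a)
```

Because T_F is linear, the identity holds exactly (up to floating-point error) for every f once C is known, which is what makes the matrix representation usable in place of the map itself.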
Following the choice of [21], we use the eigenfunctions of the Laplace-Beltrami operator as the basis. For a mesh with n vertices, the discrete Laplacian operator on the mesh is defined as an n × n matrix [16]
L = A⁻¹(D − W),   (2)
where A contains vertex weights, with A_ii equal to the Voronoi area of vertex i (i.e., a third of the sum of its one-ring neighborhood triangle areas), W is the sparse cotangent weight matrix, and D is the degree matrix, a diagonal matrix with diagonal entries D_ii = Σ_j W_ij.
The aforementioned eigenfunction basis is the solution of LΦ = ΦΛ, where Λ is a diagonal matrix whose diagonal entries are the eigenvalues in ascending order, λ₁ ≤ λ₂ ≤ ⋯ ≤ λ_n. For efficiency and robustness, we take the eigenfunctions corresponding to the k smallest eigenvalues (k ≪ n). Note that λ₁ = 0 and the corresponding trivial (constant) eigenfunction is ignored.
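For illustration, the truncated eigenbasis can be obtained by solving the generalized eigenproblem (D − W)φ = λAφ, which is equivalent to Lφ = λφ with L = A⁻¹(D − W). Below is a minimal SciPy sketch on a toy 8-vertex cycle; for brevity, uniform weights stand in for the cotangent weights and unit areas for the Voronoi areas:

```python
import numpy as np
from scipy.linalg import eigh

# Toy "mesh": an 8-vertex cycle with uniform edge weights (a simplification
# of the cotangent weights) and unit vertex areas (in place of Voronoi areas).
n = 8
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0
D = np.diag(W.sum(axis=1))   # degree matrix, D_ii = sum_j W_ij
A = np.eye(n)                # vertex (mass) matrix

# Generalized eigenproblem (D - W) phi = lambda A phi; eigh returns the
# eigenvalues in ascending order.
evals, evecs = eigh(D - W, A)

k = 4                        # keep the k smallest eigenvalues
evals, evecs = evals[:k], evecs[:, :k]

# lambda_1 = 0, with a constant (trivial) eigenfunction.
assert abs(evals[0]) < 1e-9
assert np.allclose(evecs[:, 0], evecs[0, 0] * np.ones(n))
```

On a real mesh one would assemble sparse cotangent weights and Voronoi areas and use a sparse solver (e.g. `scipy.sparse.linalg.eigsh` with `sigma=0`) to extract only the k smallest eigenpairs.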
4 Method
4.1 Overview
Our goal is to detect the intrinsic symmetry of shapes. An intrinsic symmetry is a self-homeomorphism of a smooth surface M, written as T: M → M, which preserves geodesic distances:
g(T(p), T(q)) = g(p, q),  ∀ p, q ∈ M.   (3)
Instead of directly computing a pointwise correspondence matrix, we use a functional map to describe this self-mapping. The functional map defined on the Laplacian basis is represented as a matrix, which is the coordinate transformation matrix w.r.t. the source and target bases. Since the Laplace-Beltrami operator is invariant under isometric transformations, the eigenfunction space stays invariant under the self-mapping. Therefore, the matrix corresponding to the self-mapping is a block diagonal matrix. More specifically, only one of the following two cases holds for an eigenfunction φ_i associated with a non-repeating eigenvalue (see also [22]):

φ_i ∘ T = φ_i, where φ_i is called positive;

φ_i ∘ T = −φ_i, where φ_i is called negative.
Therefore, the diagonal entry in the matrix corresponding to each non-repeating eigenfunction φ_i should be either +1 or −1, depending on whether φ_i is positive or negative. Fig. 1 shows the pipeline of our method. We train a network called SignNet to distinguish the sign of eigenfunctions under reflectional symmetry. To provide sufficient guidance, we train the network in a supervised fashion. Given an input shape, once the signs of the Laplacian eigenfunctions are predicted using our SignNet, we can fill in the diagonal of the initial functional map matrix with +1 and −1. However, in most cases the intrinsic symmetry is imperfect, with some areas experiencing non-isometric deformation. Moreover, there could also be eigenfunction spaces associated with repeating eigenvalues, in which case a diagonal matrix cannot fully express the mapping. Therefore we use a post-processing step to fine-tune the initial matrix and obtain the final matrix C.
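As a sketch of this step (with hypothetical sign predictions for k = 6 truncated eigenfunctions), the initial functional map matrix is simply a diagonal of ±1, and applying it to the spectral coefficients of a function flips the coefficients of the negative eigenfunctions:

```python
import numpy as np

# Hypothetical SignNet outputs s_i in {+1, -1} for k = 6 eigenfunctions.
signs = np.array([-1.0, 1.0, -1.0, -1.0, 1.0, 1.0])
C0 = np.diag(signs)          # initial functional map matrix

# Spectral coefficients a of some function f; its reflected image f o T
# has coefficients C0 @ a, flipping only the negative eigenfunctions.
a = np.array([0.5, 1.0, -2.0, 0.0, 3.0, 1.5])
a_sym = C0 @ a
assert np.allclose(a_sym, [-0.5, 1.0, 2.0, 0.0, 3.0, 1.5])
```

Note that C0 @ C0 is the identity, consistent with a reflection being an involution; the post-processing step then relaxes this exact ±1 structure to handle near-repeating eigenvalues and slight non-isometry.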
4.2 Learning Intrinsic Symmetry
Diagonal entries of the functional map matrix.
As described in Section 3, we detect intrinsic symmetry by computing the functional map matrix C. Although the dimension of the functional map matrix is already much lower than that of the pointwise correspondence matrix, predicting the full matrix is still challenging for optimization methods or deep networks, since there are still too many variables. We further utilize the sparse structure of the symmetry functional map to make the mapping much easier to predict.
First we need to clarify an important property of eigenfunctions under a symmetry mapping: the Laplacian eigenfunctions associated with non-repeating eigenvalues are invariant under an intrinsic symmetry mapping, up to sign ambiguity. This property is formally presented as follows:
Theorem 1.
For an intrinsic mapping T defined in Equation (3) and a Laplacian eigenfunction φ_i associated with a non-repeating eigenvalue λ_i, φ_i ∘ T = ±φ_i.
Proof.
As a well-known property of the Laplace-Beltrami operator, the operator is invariant under an isometric transformation T, i.e.,

Δ(T_F(f)) = T_F(Δf),   (4)

where T_F is the transformation on L²(M) induced by T.
Let ψ = T_F(φ_i). Applying Equation (4) to φ_i gives Δψ = T_F(Δφ_i) = λ_i ψ, which means ψ is also an eigenfunction with λ_i as its eigenvalue, so ψ is in the same eigenfunction space as φ_i. Given that λ_i is non-repeating and T is isometric (so that ‖ψ‖ = ‖φ_i‖), we have ψ = ±φ_i. ∎
From the proof of Theorem 1, we know that φ_i ∘ T and φ_i are in the same eigenfunction space. In particular, if the eigenvalue is non-repeating, we can denote φ_i ∘ T = s_i φ_i, where s_i ∈ {+1, −1}.
Based on this property of eigenfunctions, we further exploit the relationship between s_i and the functional map matrix C.
Theorem 2.
If all eigenvalues are non-repeating, then c_ij = s_i if i = j, and c_ij = 0 otherwise.
Proof.
c_ij = ⟨T_F(φ_j), φ_i⟩ = ⟨s_j φ_j, φ_i⟩. If i = j, as defined in Theorem 1, c_ii = s_i; if i ≠ j, since ⟨φ_j, φ_i⟩ = 0, we have c_ij = 0. ∎
Theorem 2 shows that C is a diagonal matrix in this case, where the entry associated with the non-repeating eigenvalue λ_i is s_i.
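Theorems 1 and 2 can be verified on a closed-form 1D example: on the interval [0, 1] with the reflection T(x) = 1 − x, the (Neumann) Laplacian eigenfunctions are φ_n(x) = √2·cos(nπx) with non-repeating eigenvalues, and φ_n ∘ T = (−1)ⁿ φ_n, so s_n = (−1)ⁿ. A short numerical check of the diagonal entries c_nn = ⟨φ_n ∘ T, φ_n⟩:

```python
import numpy as np

# Trapezoidal quadrature weights on [0, 1].
x = np.linspace(0.0, 1.0, 20001)
w = np.full(x.size, x[1] - x[0])
w[0] *= 0.5
w[-1] *= 0.5

def phi(n, t):
    # Normalized Neumann eigenfunctions of the 1D Laplacian on [0, 1].
    return np.sqrt(2.0) * np.cos(n * np.pi * t)

for n in range(1, 5):
    # Diagonal functional map entry c_nn = <phi_n o T, phi_n> with T(x) = 1 - x.
    c_nn = np.dot(w, phi(n, 1.0 - x) * phi(n, x))
    assert abs(c_nn - (-1.0) ** n) < 1e-6   # s_n = (-1)^n
```

Odd eigenfunctions are antisymmetric about x = 1/2 (s_n = −1) and even ones symmetric (s_n = +1), mirroring the positive/negative cases in Section 4.1.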
Predicting the sign of eigenfunctions.
Thus the problem is much simplified and disentangled: we can derive the whole matrix by separately considering the sign of each eigenfunction. A visualization of the eigenfunctions on shapes is shown in Fig. 2. In this illustration, red areas represent positive values and blue areas negative values. The first row shows eigenfunctions that satisfy φ_i ∘ T = φ_i (i.e., positive cases), and the second row includes shapes associated with negative eigenfunctions. From the figure it can be seen that the symmetric patterns are rather obvious: positive functions appear symmetric under reflectional symmetry, while negative ones are asymmetric. Nagar and Raman [20] propose a sampling-based method to decide the sign of each function. However, this approach depends on random samples, which takes a long time to compute and may occasionally fail. In this paper, we propose to train a neural network to learn the sign of eigenfunctions.
Fig. 1 illustrates the pipeline of our method. Given an input shape, we first compute its Laplacian matrix and the first k eigenfunctions (excluding the trivial eigenfunction associated with eigenvalue 0). Instead of taking the whole shape along with the eigenfunctions as input, which would require the neural network to deal with irregular mesh connectivity, our neural network (SignNet) processes each eigenfunction separately. Assuming the i-th eigenfunction φ_i is being processed, the input to the network includes not only φ_i, but also the first three eigenfunctions φ₁, φ₂, φ₃, which capture the characteristics of the input mesh and are also intrinsic.
The output of SignNet is a 2-dimensional softmax vector. The distribution of an eigenfunction on the mesh reflects the pattern of its sign to a great extent. Here we do not use the original positions of vertices as input, since they are extrinsic features. In contrast, the first few Laplacian eigenvectors are intrinsic, and thus more suitable for detecting intrinsic symmetry.
To visualize this, in Fig. 1(b) we plot the embedding of vertices, taking the first three eigenfunctions evaluated at each vertex v, i.e., (φ₁(v), φ₂(v), φ₃(v)), as vertex coordinates and φ_i(v) as the color (blue to red means small value to large value). It can be observed that the shapes of the embedding are extrinsically symmetric even if the mesh is only intrinsically symmetric. Also, we can see that those eigenfunctions are either symmetric or asymmetric, corresponding to positive or negative eigenfunctions.
In the SignNet neural network, we use Multi-Layer Perceptrons (MLPs) to extract vertex features with increasing complexity. Then a max-pooling is applied over all vertices to aggregate global features. Following the pooling layer are several fully-connected layers with decreasing numbers of channels. In the end, the network predicts a two-dimensional score vector y, i.e.,

y = (y₁, y₂) = SignNet(φ₁, φ₂, φ₃, φ_i),   (5)

such that the sign is predicted to be negative if y₁ > y₂, or positive otherwise. Let t be a two-dimensional label vector, with t = (1, 0) if s_i = −1, and t = (0, 1) if s_i = +1. The loss function is designed as the cross-entropy between y and the ground-truth sign label t, formulated as

L = −Σ_j t_j log y_j.   (6)
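A drastically reduced sketch of this architecture and loss (hypothetical layer widths, far smaller than the actual network described in Section 4.4, and untrained random weights) illustrates the data flow: shared-weight per-vertex MLPs, max-pooling to a global feature, fully connected layers, softmax, and cross-entropy:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Per-vertex input: (phi_1, phi_2, phi_3, phi_i) sampled at each vertex.
n_vertices, in_dim = 100, 4
W1 = rng.standard_normal((in_dim, 16)) * 0.1   # shared-weight MLP layers
W2 = rng.standard_normal((16, 32)) * 0.1
W3 = rng.standard_normal((32, 2)) * 0.1        # 2-way score head

def signnet(X):
    h = relu(X @ W1)       # applied independently to every vertex
    h = relu(h @ W2)
    g = h.max(axis=0)      # max-pooling -> global feature
    return softmax(g @ W3) # 2-dimensional score vector y

def cross_entropy(y, label):
    # label: 0 for negative (s_i = -1), 1 for positive (s_i = +1)
    t = np.eye(2)[label]
    return -np.sum(t * np.log(y + 1e-12))

X = rng.standard_normal((n_vertices, in_dim))  # stand-in eigenfunction values
y = signnet(X)
loss = cross_entropy(y, label=1)
assert y.shape == (2,) and abs(y.sum() - 1.0) < 1e-9
```

Because the MLP weights are shared across vertices and the pooling is symmetric, the prediction is invariant to vertex ordering, which is what lets the network ignore mesh connectivity.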
4.3 Training Data
Our learning-based approach requires a dataset for training. For this purpose, we choose as the training set a combination of the SHREC 2007 dataset [9] and the elephant and flamingo models [27], which contain non-rigidly deformed shapes that are intrinsically symmetric. As a shape retrieval dataset, the SHREC dataset includes shapes of different categories. Meanwhile, these sets are independent from the test sets (SCAPE [1] and TOSCA [5]). This ensures fairness and evaluates the generalizability of our learning-based approach.
We built a simple user interface to visualize and manually label each Laplacian eigenfunction as either positive, negative, or neither, under the reflectional symmetry transform. The "neither" case happens for shapes that are not intrinsically reflectionally symmetric, or for eigenfunctions with repeating eigenvalues, as shown in Fig. 3. These are excluded from our training dataset. The dataset will be released to the community to facilitate future research.
4.4 Network Architecture
In our SignNet, the input placeholder is set to work with 4500 points: meshes with fewer than 4500 vertices are padded with zeros, and meshes with more than 4500 vertices are downsampled to 4500 points. In the network, we use multi-layer perceptrons (MLPs), max-pooling layers, and fully connected layers. There are five MLP layers, having 64, 128, 256, 512, and 4096 channels respectively, with ReLU activation and batch normalization layers right after the output of each MLP layer. Then we use a max-pooling layer to aggregate the global features. Such a combination of shared-weight MLP and max-pooling layers is proven to be effective in fitting functions defined on point sets (see the appendix in [23]). Then, four fully connected layers are applied to the global features. Their output channels are 512, 128, 32, and 2. The first three layers are also followed by ReLU activation, batch normalization, and (70%) dropout layers.

4.5 Post-processing
In most cases, the meshes that we process are not perfectly intrinsically symmetric, so the entries of the functional map matrix will not be exactly −1 and +1. Moreover, owing to imperfect triangulation and the discretization of the Laplacian operator, eigenvalues computed numerically are mostly non-repeating, even though there are actually eigenspaces containing multiple eigenfunctions. Such sub-eigenspaces need more than a single diagonal entry, usually an orthogonal submatrix, to describe the functional map.
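The final conversion from a functional map matrix back to a point-to-point correspondence can be sketched as a nearest-neighbor search in spectral coefficient space. The toy NumPy example below is illustrative only: a random orthonormal basis and a permutation stand in for the eigenbasis and the ground-truth symmetry, and with a full basis the recovery is exact:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30   # a full n x n basis so that recovery is exact in this toy case

Phi, _ = np.linalg.qr(rng.standard_normal((n, n)))  # orthonormal "eigenbasis"
perm = rng.permutation(n)                            # ground-truth symmetry map

# Functional map matrix of the permutation: C = Phi^T P Phi, where P is the
# pullback matrix with (P f)(T(p)) = f(p).
P = np.zeros((n, n))
P[perm, np.arange(n)] = 1.0
C = Phi.T @ P @ Phi

# Recover the point-to-point map: vertex v maps to the vertex u whose
# spectral coefficients Phi[u] are nearest to C @ Phi[v].
mapped = Phi @ C.T   # row v equals (C @ Phi[v]^T)^T
dists = ((mapped[:, None, :] - Phi[None, :, :]) ** 2).sum(axis=2)
recovered = dists.argmin(axis=1)
assert np.array_equal(recovered, perm)
```

With a truncated basis and an imperfect C, the same nearest-neighbor search yields an approximate correspondence, which the refinement step then improves.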
5 Results and Evaluation
We first describe the implementation details of our method in Section 5.1. In Section 5.2 we compare our method with existing methods, both qualitatively and quantitatively. In addition to the accuracy of symmetry, we also measure the run time of different methods, showing the significant efficiency advantage of our method. We further test the robustness of our method in Section 5.4. Due to the shared-weight structure of our network, our method stays robust under different topologies and vertex counts.
5.1 Implementation Details
We now present details of the training and test process of our SignNet.
The computation of the Laplacian matrix and eigenvectors is described in Section 3; please refer to [16] for more implementation details related to these steps. We implement the neural network architecture with TensorFlow. The network is optimized using the Adam [12] solver with a momentum of 0.9. We truncate at the first 12 lowest eigenvectors (i.e., k = 12), and by default the input feature has 4 dimensions, composed of the first 3 eigenvectors (i.e., φ₁, φ₂, φ₃) and the i-th eigenvector. We train the network for 500 epochs on a PC with an NVIDIA 1080 Ti GPU and an Intel i7-7700 CPU.
Method     MT    BIM   OFM   GRS   FA    Ours
Cat        66.0  93.7  90.0  96.5  95.6  96.0
Centaur    92.0  100   96.0  92.0  100   100
David      82.0  97.4  94.8  92.5  96.2  97.2
Dog        91.0  100   93.2  97.4  98.8  100
Horse      92.0  97.1  95.2  99.5  97.3  96.4
Michael    87.0  98.9  94.6  91.4  96.5  98.7
Victoria   83.0  98.3  98.7  95.5  96.2  97.8
Wolf       100   100   100   100   100   100
Gorilla    --    98.9  98.9  100   100   100
Average    85.0  98.0  95.1  94.5  97.8  98.1
5.2 Comparison of Results
One of the biggest advantages of learning-based methods is speed: our algorithm runs much faster than previous sampling-based intrinsic symmetry detection algorithms. Also, the neural network can learn common properties of eigenfunctions across models to distinguish their signs. This avoids the randomness of sampling, and so also yields better performance in terms of correspondence accuracy. In this section, we compare our method with state-of-the-art methods including MT [10], BIM [11], OFM [14], GRS [30], and FA [20] on the following metrics, widely used in the literature:

Correspondence rate: Assume that (x, y) is a ground-truth correspondence pair, and the algorithm's prediction for x is y′. If the geodesic distance between y′ and y is less than a given threshold, then we count this point as a correct match. The correspondence rate measures the ratio of labeled points that are correctly matched.

Mesh rate: the ratio of meshes whose symmetry is regarded as correctly detected, i.e., whose correspondence rate is sufficiently high.

Time: We measure the average run time of each algorithm to compute the symmetry.
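The correspondence-rate metric can be sketched as follows (an illustrative implementation with hypothetical names; the 4-point distance matrix and threshold are toy stand-ins):

```python
import numpy as np

def correspondence_rate(geo_dist, gt_pairs, pred, threshold):
    """Fraction of labeled points matched within a geodesic threshold.

    geo_dist: (n, n) pairwise geodesic distance matrix
    gt_pairs: dict {x: y} of ground-truth symmetric pairs
    pred:     (n,) predicted correspondence, pred[x] = predicted match of x
    """
    correct = sum(geo_dist[pred[x], y] < threshold for x, y in gt_pairs.items())
    return correct / len(gt_pairs)

# Toy usage on a hypothetical 4-point shape.
geo_dist = np.array([[0., 1., 2., 3.],
                     [1., 0., 1., 2.],
                     [2., 1., 0., 1.],
                     [3., 2., 1., 0.]])
gt_pairs = {0: 3, 1: 2}           # ground truth: 0 <-> 3, 1 <-> 2
pred = np.array([3, 0, 1, 0])     # point 1 is mismatched to point 0
rate = correspondence_rate(geo_dist, gt_pairs, pred, threshold=1.5)
assert rate == 0.5
```

In practice the threshold is taken relative to the shape's geodesic scale, so the metric is comparable across meshes of different sizes.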
We compare different methods using SCAPE [1] and TOSCA [5] datasets which contain intrinsically symmetric meshes, and the ground truth symmetric correspondences are from [2]. We also test our method on Handstand, Swing [29] and FAUST [3] datasets for qualitative evaluation, as no ground truth correspondences are available. As we mentioned in Section 4.3, our training set is independent from the test sets, to ensure fairness.
The results on the SCAPE dataset of deformed human shapes are reported in Table 1. As can be seen, our method achieves the best accuracy, improving the correspondence rate over the previous best (FA). Both our method and FA achieve the best mesh correct rate. In terms of runtime, our method is over 100 times faster than FA, and even faster relative to other existing methods.
The results on the TOSCA dataset are reported in Tables 2 and 3 for the comparisons of correspondence rate and mesh rate, respectively. We report performance on individual object categories, and the overall average. Our method shows similar improvements over existing methods. Some qualitative comparisons are shown in Fig. 4.
Method     MT    BIM   OFM   GRS   FA    Ours
Cat        54.6  90.9  90.9  100   100   100
Centaur    100   100   100   100   100   100
David      57.1  100   100   100   100   100
Dog        88.9  100   88.9  100   100   100
Horse      100   100   87.5  100   100   100
Michael    75    100   100   100   100   100
Victoria   63.6  100   100   100   100   100
Wolf       100   100   100   100   100   100
Gorilla    --    100   100   100   100   100
Average    76    98.7  92.6  100   100   100
5.3 Evaluation of Design Choices
As described above, by default we use the first three Laplacian eigenfunctions as the coordinates to embed vertices into an intrinsic space. Compared to Laplacian eigenfunctions, the raw vertex positions are invariant neither under global rigid transformations nor under non-rigid isometric deformations, and are thus not suitable for predicting the sign of eigenfunctions on the mesh. In this experiment, we compare using positions versus eigenfunctions as input. Table 4 lists the average accuracy of sign prediction on the TOSCA and SCAPE datasets. It shows that the accuracy using positions (denoted as Pos.) is much lower than that of our design. During experiments, we observe that when models vary over a large range of scales, the network with position input performs even worse.
We compute the functional map matrix by independently predicting the sign of each eigenfunction. This strategy circumvents sign flips and permutations of eigenfunctions. To show its advantage, we design another network which takes all the eigenfunctions as input and predicts all the diagonal entries at once. We denote this alternative design as Diag. in Table 4. We can see that its sign prediction accuracy is much lower than ours. This is probably because the input and output dimensions are too high for the network to learn.
The input to the network is the first m eigenfunctions as well as the i-th eigenfunction, i.e., (φ₁, …, φ_m, φ_i). Too small a value of m would make different vertices indistinguishable, making it impossible to determine the sign, while too large a value would make the network more complex and introduce more redundant, noisy high-frequency eigenvectors. Here we vary m from 2 to 4. The table shows that m = 3 (Ours) achieves the best performance. Our input is defined on vertices. Since existing point-based deep learning methods such as PointNet [23] take extrinsic point coordinates as input, it is possible to feed the same input to such architectures for prediction. We test this by feeding our input directly to PointNet [23], and report the accuracy of sign prediction. Its performance is also lower than that of our method. This is probably due to our compact network design that generalizes well to new data.
5.4 Robustness
We now test the robustness of our learning-based approach.
Different topology.
Since the geodesic distance and the eigenfunctions are defined on the manifold M, the topology of M contributes significantly to the computation of intrinsic symmetry. For example, MT [10] requires the topology to be genus-zero. In our method, since the eigenfunctions work consistently under different topologies, the network stays robust under topological changes. As shown in Fig. 5, we reconstruct meshes with self-intersections in space, and the produced meshes are high-genus. The first row shows the original shapes with problematic regions highlighted. The second row shows the initial correspondences of the intrinsic symmetry mapping, and the correspondences after refinement. For these challenging cases, intrinsic symmetry is no longer precisely satisfied, and the refinement is effective in improving the detected symmetry.
Incomplete shapes.
Sometimes there can be missing data on shapes due to imperfect scanning or mesh modeling. We expect an intrinsic symmetry detection algorithm to work on such incomplete shapes. We perform a test by cutting some holes in the surface of the models. Fig. 6 shows the results of our method. It can be seen that the symmetric pairs on the shapes are still reasonable.
5.5 Failure Case
As shown by the statistics, our method works well in most cases. However, due to the deterministic network structure, our method can only predict one symmetry result for a given object, even if it has multiple intrinsic symmetries. In Fig. 7, we can see that the table has more than one reflectional symmetry plane, while our method cannot predict all of them. Extending our method to predict the entire symmetry group end-to-end is future work.
5.6 More Qualitative Results
6 Conclusions and Future Work
In this paper, we presented a novel learning-based approach to intrinsic reflectional symmetry detection. Our method is based on functional maps, and further develops a neural network architecture that predicts the sign of one Laplacian eigenfunction at a time. We design the network to take the first few Laplacian eigenfunctions as input, in addition to the eigenfunction to be predicted. Extensive experiments show the real-time performance and superior accuracy of our method compared with the state of the art. We also performed experiments to validate the design choices and the robustness of our method in challenging cases.
This work addresses global intrinsic reflectional symmetry, which is most common in practice. As future work, it would be interesting to also include rotational symmetry detection, although the properties of the rotational-symmetry functional map matrix are more complicated. Another possible direction is to extend this learning-based algorithm to partial symmetry detection.
References
 [1] (2005) SCAPE: shape completion and animation of people. In ACM Transactions on Graphics (TOG), Vol. 24, pp. 408–416.
 [2] (2005) The correlated correspondence algorithm for unsupervised registration of non-rigid surfaces. In Advances in Neural Information Processing Systems, pp. 33–40.
 [3] (2014) FAUST: dataset and evaluation for 3D mesh registration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3794–3801.
 [4] (2016) Learning shape correspondence with anisotropic convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 3189–3197.
 [5] (2008) Numerical geometry of non-rigid shapes. Springer Science & Business Media.
 [6] (2017) Geometric deep learning: going beyond Euclidean data. IEEE Signal Processing Magazine 34 (4), pp. 18–42.
 [7] (2013) Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203.
 [8] (2017) Symmetry-aware mesh segmentation into uniform overlapping patches. In Computer Graphics Forum, Vol. 36, pp. 95–107.
 [9] (2007) SHREC: shape retrieval contest: watertight models track. http://watertight.ge.imati.cnr.it.
 [10] (2010) Möbius transformations for global intrinsic symmetry analysis. In Computer Graphics Forum, Vol. 29, pp. 1689–1700.
 [11] (2011) Blended intrinsic maps. In ACM Transactions on Graphics (TOG), Vol. 30, pp. 79.
 [12] (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
 [13] (2010) Symmetry factored embedding and distance. In ACM Transactions on Graphics (TOG), Vol. 29, pp. 103.
 [14] (2015) Properly constrained orthonormal functional maps for intrinsic symmetries. Computers & Graphics 46, pp. 198–208.
 [15] (2015) Geodesic convolutional neural networks on Riemannian manifolds. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 37–45.
 [16]
 [17] (2007) Symmetrization. In ACM Transactions on Graphics (TOG), Vol. 26, pp. 63.
 [18] (2014) Structure-aware shape processing. In ACM SIGGRAPH 2014 Courses, pp. 13.
 [19] (2010) Illustrating how mechanical assemblies work.
 [20] (2018) Fast and accurate intrinsic symmetry detection. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 417–434.
 [21] (2012) Functional maps: a flexible representation of maps between shapes. ACM Transactions on Graphics (TOG) 31 (4), pp. 30.
 [22] (2008) Global intrinsic symmetries of shapes. In Computer Graphics Forum, Vol. 27, pp. 1341–1348.
 [23] (2017) PointNet: deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
 [24] (2019) Functional maps representation on product manifolds. Computer Graphics Forum 38 (1), pp. 678–689.
 [25] (2007) Laplace-Beltrami eigenfunctions for deformation invariant shape representation. In Proceedings of the Fifth Eurographics Symposium on Geometry Processing, pp. 225–233.
 [26] (2016) A symmetry prior for convex variational 3D reconstruction. In European Conference on Computer Vision, pp. 313–328.
 [27] (2004) Deformation transfer for triangle meshes. ACM Transactions on Graphics (TOG) 23 (3), pp. 399–405.
 [28] (2014) Relating shapes via geometric symmetries and regularities. ACM Transactions on Graphics (TOG) 33 (4), pp. 119.
 [29] (2008) Articulated mesh animation from multi-view silhouettes. ACM Transactions on Graphics (TOG) 27 (3), pp. 97.
 [30] (2017) Group representation of global intrinsic symmetries. In Computer Graphics Forum, Vol. 36, pp. 51–61.
 [31] (2012) Multi-scale partial intrinsic symmetry detection. ACM Transactions on Graphics (TOG) 31 (6), pp. 181.
 [32] (2009) Partial intrinsic reflectional symmetry of 3D shapes. ACM Transactions on Graphics (TOG) 28 (5), pp. 138.
 [33] (2017) SyncSpecCNN: synchronized spectral CNN for 3D shape segmentation. In CVPR, pp. 6584–6592.