What is it indeed that gives us the feeling of elegance in a solution, in a demonstration? It is the harmony of the diverse parts, their symmetry, their happy balance.
Symmetry is a common pattern that appears ubiquitously in the world. The majority of living things, including humans, animals, and plants (e.g., flowers), have some form of symmetry. It is also a widely employed design principle in man-made objects, including buildings, furniture, and vehicles, to name a few.
Due to its wide applicability, symmetry information has been exploited in computer graphics to help address many geometry processing tasks, including shape matching, segmentation, editing, completion, and understanding. In these applications, symmetry detection is usually an integral component, so efficient symmetry detection, especially at real-time rates, has significant benefits, e.g., avoiding impeding real-time performance in 3D acquisition/reconstruction, and improving user experience in interactive shape editing by reducing users' waiting time.
In geometry processing, researchers mainly focus on symmetry in the spatial domain, including extrinsic symmetry defined in Euclidean space or intrinsic symmetry defined in non-Euclidean (manifold) space. Extrinsic symmetry refers to shape invariance w.r.t. rigid (including reflectional) transformations. Compared to extrinsic symmetry, intrinsic symmetry is more difficult to detect due to its much larger solution space, as discussed in previous work [30, 10, 22].
Given a shape model, intrinsic symmetry detection aims to estimate a self-homeomorphism on the manifold that preserves the geodesic distance between each point pair. Usually, the manifold is discretized as a triangle mesh, and algorithms predict a point-wise correspondence matrix to represent symmetric pairs. State-of-the-art methods for intrinsic symmetry detection are largely based on embedding the symmetry space into some lower-dimensional space, such as the Möbius transformation space, the Global Point Signature (GPS) space, or the functional map space [30, 20], and then performing random sampling or voting, which suffers from high computational cost and uncertainty of results due to randomness.
Despite great effort, efficient and robust detection of intrinsic symmetry remains challenging. Existing state-of-the-art methods typically take several seconds or longer to analyze one shape, and may produce unreliable results for difficult cases. To address this, we design the first learning-based method for intrinsic symmetry detection. Like most existing works, we focus on intrinsic reflectional symmetry, as it is the most common in the real world. Learning intrinsic symmetry directly on meshes is challenging, due to their irregular connectivity and the global nature of symmetry. We simplify this problem when designing the deep neural network, such that it does not directly process the edges and faces of the mesh, but instead takes intrinsic features as input.
As in prior work, given an input mesh, the symmetry mapping defined on it can be represented using a functional map, or equivalently a functional map matrix, with the Laplace-Beltrami eigenfunctions providing a basis for analysis. In this matrix, entries corresponding to eigenfunctions associated with non-repeating eigenvalues are determined by the sign (odd or even) of each eigenfunction after the symmetry mapping is applied. State-of-the-art work determines the sign of each eigenfunction through random sampling. Although this is faster than previous methods, it is still slow (requiring several seconds for a typical mesh), and may not be sufficiently robust.
To address this, we train a deep neural network to predict the sign of each eigenfunction. We design SignNet, a deep neural network for sign prediction, which takes as input not only the eigenfunction to be classified, but also the first few Laplacian eigenfunctions, which effectively encode intrinsic characteristics of the mesh while avoiding dealing with mesh connectivity explicitly. To make the computation more efficient, we truncate the Laplace-Beltrami eigenfunctions in the spectral domain to lower the dimension of the representation. After predicting the entries of the functional map matrix, we apply a post-processing step to further fine-tune the results (addressing issues such as near-identical eigenvalues and slight non-isometry) and convert the functional map to a one-to-one point correspondence.
The main contributions of this work are summarized as follows:
We propose the first learning-based method to detect global intrinsic reflectional symmetry of shapes. Compared to previous works, our method achieves real-time performance, running over 100 times faster than the state of the art. Our method also achieves higher accuracy and is more robust.
The intrinsic symmetry problem is formulated using a functional map. To compute the entries of the functional map matrix, we design a novel deep neural network to determine the sign of each eigenfunction. Our network, which predicts the sign of individual eigenfunctions from intrinsic features, is compact and generalizes well to new shapes.
2 Related Work
Intrinsic Symmetry Detection.
Many previous works have addressed intrinsic symmetry detection. Ovsjanikov et al.  formulate the concept of intrinsic symmetry. They propose to use the Global Point Signature (GPS)  to transform the intrinsic symmetry of shapes into Euclidean symmetry in the signature embedding space. The symmetry is detected by first deciding the sign sequence of eigenfunctions and then finding the nearest neighbors of the GPS of points. Xu et al. [32, 31] extend the concept of intrinsic symmetry and introduce partial symmetry, where only some parts of an object are symmetric. In this paper, we focus on global intrinsic symmetry due to its wide applicability, as does most research in this area.
To address the large solution space, some works parametrize intrinsic symmetry in lower-dimensional spaces. A highly related problem is investigated by Mitra et al. , who propose a method to symmetrize imperfectly symmetric objects. They find intrinsically symmetric point pairs by voting, and then parametrize possible transformations in a canonical space and optimize the transformation to align symmetric pairs. Kim et al.  use another parametrization of symmetry transformations. They find a set of symmetric points by detecting critical points of the Average Geodesic Distance (AGD) function, and generate candidate anti-Möbius transformations that can describe the symmetry transformation by enumerating subsets of these points. As with other voting-based methods, running time can be an issue. Also, the use of anti-Möbius transformations limits the method to genus-zero manifolds. Lipman et al.  detect symmetry by finding the orbits of points under symmetry transformations. A fuzzy point-wise symmetry correspondence matrix is generated randomly, based on which they further compute a Symmetry Factored Embedding (SFE) and Symmetry Factored Distance (SFD). However, the computation of the correspondence matrix is very time-consuming.
The relationship between symmetry groups and matrices is studied in . Similarly, Wang et al.  establish a homeomorphism between the symmetry group and a multiplicative group of matrices. They introduce the functional map to parametrize the symmetry and limit the search space of matrix entries to the subspace of eigenfunctions. However, due to noise in the manifolds and errors during numerical calculation, eigenvalues that are ideally identical are usually computed as distinct values in practice, making it difficult to determine the true subspaces and resulting in poor symmetry detection. As described in , their continuity and sparsity make functional maps a suitable representation for correspondence problems, including intrinsic symmetry. Functional maps are also used in . As also noted in , eigenfunctions are invariant under self-isometry apart from sign ambiguity, and the diagonal entries of the functional map matrix are related to the signs of the corresponding eigenfunctions. To decide the signs, landmark symmetric point pairs and the geodesic lines connecting them are selected. Nagar and Raman  design an explicit solution to this problem, but since their method depends on landmark pairs, its random sampling requires a trade-off between robustness and computational complexity. Compared to state-of-the-art methods, our learning-based method avoids explicit sampling and is much faster (over 100 times faster for a typical example), achieving real-time performance. It circumvents the randomness of sampling, and is thus more robust and accurate.
Shape Analysis with Deep Learning.
Our method learns the properties of eigenfunctions defined on manifolds using deep neural networks. We therefore review research that defines neural networks on 3D shapes. With the increasing demand for faster and better analysis of 3D geometry, recent works exploit deep learning on shapes. Boscaini et al.  design an anisotropic convolutional neural network to learn correspondences across shapes. Masci et al.  also design a network in the spatial domain.
Alternatively, another category of work constructs neural networks in the spectral domain. Bruna et al.  introduce a spectral convolutional layer on graphs, which can be viewed as a general form of meshes. As described in , a fundamental problem of spectral convolution is its dependence on the basis, making it difficult to generalize across different domains. To mitigate this, Yi et al.  propose a network architecture that synchronizes the spectral domains and then performs convolutional operations on them. Rodolà et al.  design a fully connected network to learn features that can generate functional map matrices; however, fully connected networks may suffer from overfitting, and their method requires point-wise correspondences to train the model, which is not required by ours.
In this paper, we aim to detect intrinsic symmetries for general shapes, where the topology and triangulation may vary significantly. We therefore prefer a network architecture that runs robustly in cross-domain settings. To circumvent the irregular connectivity of meshes, we take as input intrinsic geometric features defined on mesh vertices, namely Laplacian eigenfunctions, which implicitly carry connectivity information while avoiding explicit handling of complex mesh connectivity. Our method thus handles general mesh topology and generalizes well.
3 Representing Intrinsic Symmetry by Functional Maps
To cope with discrete and high-dimensional point-wise correspondence matrices, we use functional maps to represent the self-mapping. The functional map was introduced in , first used to describe correspondences between two shapes. In our problem, a self-isometry $T: M \to M$ can also be viewed as a mapping between two identical shapes, and it naturally induces a bijective transformation $T_F$ on the square-integrable function space $L^2(M)$, such that $T_F(f) = f \circ T^{-1}$.
Three remarks are presented in  w.r.t. the functional map $T_F$, which are summarized below.
The original self-isometry $T$ can be recovered from $T_F$.
For each intrinsic symmetry $T$ on $M$, $T_F$ is a linear transformation on $L^2(M)$.
Given a fixed basis of $L^2(M)$, the linear transformation $T_F$ can be represented as a matrix.
Assume that $L^2(M)$ is equipped with an orthonormal basis $\{\phi_i\}$. The functional map $T_F$ can then be represented by a matrix $C$ with entries $C_{ij} = \langle T_F(\phi_j), \phi_i \rangle$. For each function $f$ with coefficient vector $\mathbf{a}$ (i.e., $f = \sum_i a_i \phi_i$), the coefficient vector of the mapped function $T_F(f)$ is $C\mathbf{a}$.
In the discrete setting, we use the standard cotangent discretization of the Laplace-Beltrami operator, $L = A^{-1}(D - W)$, where $A$ contains vertex weights, with $A_{ii}$ equal to the Voronoi area of vertex $i$ (i.e., a third of the sum of one-ring neighborhood triangle areas), $W$ is the sparse cotangent weight matrix, and $D$ is the degree matrix, a diagonal matrix with diagonal entries $D_{ii} = \sum_j W_{ij}$.
The aforementioned eigenfunction basis $\Phi = [\phi_1, \phi_2, \dots]$ is the solution of $L\Phi = \Phi\Lambda$, where $\Lambda$ is a diagonal matrix whose diagonal entries are the eigenvalues in ascending order, $\lambda_1 \le \lambda_2 \le \cdots$. For efficiency and robustness, we take only the eigenfunctions corresponding to the $k$ smallest eigenvalues. Note that the smallest eigenvalue is $0$ and the corresponding trivial (constant) eigenfunction is ignored.
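As an illustration of this eigen-decomposition, the following sketch computes the first few Laplacian eigenfunctions of a tiny graph with numpy. For simplicity it uses unit edge and vertex weights (a graph Laplacian $L = D - W$) rather than the cotangent/Voronoi-area discretization above, and a 6-cycle stands in for a mesh:

```python
import numpy as np

def laplacian_eigenbasis(W, k):
    """Return the k smallest eigenvalues/eigenfunctions of L = D - W.

    W: symmetric (n, n) edge-weight matrix. This sketch uses unit vertex
    weights; the paper's operator additionally uses cotangent weights
    and Voronoi areas.
    """
    D = np.diag(W.sum(axis=1))        # degree matrix, D_ii = sum_j W_ij
    L = D - W                         # discrete (graph) Laplacian
    evals, evecs = np.linalg.eigh(L)  # eigenvalues returned in ascending order
    return evals[:k], evecs[:, :k]

# Toy example: a 6-cycle graph with unit edge weights.
n = 6
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0

evals, evecs = laplacian_eigenbasis(W, k=4)
# The smallest eigenvalue is 0 with a constant ("trivial") eigenfunction,
# which the method discards.
print(np.round(evals, 6))
```

For a real mesh, a sparse eigensolver over the cotangent Laplacian would replace the dense `eigh` call, but the ascending-eigenvalue truncation is the same.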
Our goal is to detect the intrinsic symmetry of shapes. An intrinsic symmetry is a self-homeomorphism of a smooth surface $M$, written as $T: M \to M$, which preserves geodesic distances: $g(x, y) = g(T(x), T(y))$ for all $x, y \in M$, where $g$ denotes the geodesic distance.
Instead of directly computing a point-wise correspondence matrix, we use a functional map to describe this self-mapping. The functional map defined on the Laplacian basis is represented as a matrix, which is the coordinate transformation matrix w.r.t. the source and target bases. Since the Laplace-Beltrami operator is invariant under isometric transformations, the eigenfunction space stays invariant under the self-mapping. Therefore, the matrix corresponding to the self-mapping is block diagonal. More specifically, for each eigenfunction $\phi_i$ associated with a non-repeating eigenvalue, exactly one of the following two cases holds (see also ):
$\phi_i \circ T = \phi_i$, in which case $\phi_i$ is called positive.
$\phi_i \circ T = -\phi_i$, in which case $\phi_i$ is called negative.
Therefore, the entry in the matrix corresponding to each non-repeating eigenfunction $\phi_i$ should be either +1 or -1, depending on whether $\phi_i$ is positive or negative. Fig. 1 shows the pipeline of our method. We train a network called SignNet to distinguish the signs of eigenfunctions under reflectional symmetry. To provide sufficient guidance, we train the network in a supervised fashion. Given an input shape, once the signs of the Laplacian eigenfunctions are predicted using our SignNet, we fill in the diagonal of the initial functional map matrix with +1 and -1. However, in most cases the intrinsic symmetry is imperfect, with some areas undergoing non-isometric deformation. Moreover, there could also be eigenspaces associated with repeating eigenvalues, in which case a diagonal matrix cannot fully express the mapping. We therefore use a post-processing step to fine-tune the initial matrix and obtain the final matrix $C$.
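To make this pipeline concrete, here is a minimal numpy sketch, with hypothetical toy data and without the paper's post-processing, of filling the diagonal of the functional map matrix with predicted signs and converting it to a point-to-point map by nearest-neighbor search in the spectral embedding:

```python
import numpy as np

def functional_map_from_signs(signs):
    """Initial functional map matrix C with predicted signs on the diagonal."""
    return np.diag(np.asarray(signs, dtype=float))

def point_correspondence(Phi, C):
    """Convert a functional map to a point-to-point map.

    Phi: (n, k) truncated Laplacian eigenbasis; C: (k, k) functional map.
    Each vertex's spectral embedding Phi[i] is mapped by C, then matched
    to its nearest neighbor in the original embedding (a common conversion;
    the paper additionally refines C before this step).
    """
    mapped = Phi @ C.T                                        # (n, k)
    d2 = ((mapped[:, None, :] - Phi[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)  # index of the symmetric point of each vertex

# Toy example: 4 points on a line, symmetric about the origin.
# phi_1 (= x) is odd (negative), phi_2 (= x^2) is even (positive).
x = np.array([-2.0, -1.0, 1.0, 2.0])
Phi = np.stack([x, x ** 2], axis=1)
C = functional_map_from_signs([-1, +1])
corr = point_correspondence(Phi, C)
print(corr)  # → [3 2 1 0], each point maps to its mirror image
```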
4.2 Learning Intrinsic Symmetry
Diagonal entries of the functional map matrix.
As described in Section 3, we detect intrinsic symmetry by computing the functional map matrix $C$. Although the dimension of the functional map matrix is already much lower than that of the point-wise correspondence matrix, predicting the full matrix is still challenging for optimization methods or deep networks, since there are still too many variables. We further exploit the sparse structure of the symmetry functional map to make the mapping much easier to predict.
First, we clarify an important property of eigenfunctions under a symmetry mapping: the Laplacian eigenfunctions associated with non-repeating eigenvalues are invariant under an intrinsic symmetry mapping, up to sign ambiguity. This property is formally stated as follows:
Theorem 1. For an intrinsic mapping $T$ defined in Equation (3) and a Laplacian eigenfunction $\phi_i$ associated with a non-repeating eigenvalue $\lambda_i$, $\phi_i \circ T = \pm \phi_i$.
Proof. As a well-known property of the Laplace-Beltrami operator, the operator is invariant under an isometric transformation $T$, i.e., $\Delta(T_F f) = T_F(\Delta f)$, where $T_F$ is the transformation on $L^2(M)$ introduced by $T$. Let $g = T_F(\phi_i)$; we then obtain $\Delta g = T_F(\Delta \phi_i) = \lambda_i T_F(\phi_i) = \lambda_i g$, which means $g$ is also an eigenfunction with $\lambda_i$ as its eigenvalue, so $g$ is in the same eigenspace as $\phi_i$. Given that $\lambda_i$ is non-repeating and $T$ is isometric (so $T_F$ preserves norms), we have $T_F(\phi_i) = \pm \phi_i$, i.e., $\phi_i \circ T = \pm \phi_i$. ∎
From the proof of Theorem 1, we know that $\phi_i$ and $\phi_i \circ T$ lie in the same eigenspace. In particular, if this eigenvalue is non-repeating, we can write $\phi_i \circ T = s_i \phi_i$, where $s_i \in \{+1, -1\}$.
Based on this property of eigenfunctions, we further exploit the relationship between the signs $s_i$ and the functional map matrix $C$.
Theorem 2. If all eigenvalues are non-repeating, then $C_{ij} = s_i$ if $i = j$, and $C_{ij} = 0$ otherwise.
Proof. By definition, $C_{ij} = \langle T_F(\phi_j), \phi_i \rangle$. If $i = j$, as shown in Theorem 1, $C_{ii} = \langle s_i \phi_i, \phi_i \rangle = s_i$; if $i \neq j$, since $\langle \phi_j, \phi_i \rangle = 0$, we have $C_{ij} = \langle s_j \phi_j, \phi_i \rangle = 0$. ∎
Theorem 2 shows that $C$ is a block diagonal matrix, where the diagonal entry associated with a non-repeating eigenvalue $\lambda_i$ is $s_i$.
Predicting the sign of eigenfunctions.
The problem is thus greatly simplified and disentangled: we can derive the whole matrix by separately considering the sign of each eigenfunction. A visualization of eigenfunctions on shapes is shown in Fig. 2, where red areas represent positive values and blue areas negative values. The first row shows eigenfunctions satisfying $\phi_i \circ T = \phi_i$ (i.e., positive cases), and the second row shows shapes associated with negative eigenfunctions. The symmetric patterns are rather obvious: positive eigenfunctions appear symmetric under the reflectional symmetry, while negative ones appear anti-symmetric (their values are negated across the symmetry). Nagar and Raman  propose a sampling-based method to decide the sign of each eigenfunction. However, this approach depends on random samples, which takes a long time to compute and may occasionally fail. In this paper, we instead train a neural network to learn the signs of eigenfunctions.
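This sign structure can be checked on a toy example. A path graph is symmetric under reversing its vertex order, and its Laplacian eigenvectors (discrete cosines) are alternately even and odd under that reversal. The sketch below reads off each sign by brute-force comparison with the reversed eigenvector; SignNet's purpose is to predict the same labels without knowing the symmetry map:

```python
import numpy as np

# A path graph is "reflectionally symmetric" under reversing its vertices.
n = 8
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(1)) - W
evals, evecs = np.linalg.eigh(L)

T = np.arange(n)[::-1]         # the symmetry map: vertex j -> n-1-j
signs = []
for i in range(1, 5):          # skip the trivial constant eigenfunction
    phi = evecs[:, i]
    # phi o T equals +phi (positive) or -phi (negative)
    if np.allclose(phi[T], phi, atol=1e-8):
        signs.append(+1)
    else:
        assert np.allclose(phi[T], -phi, atol=1e-8)
        signs.append(-1)
print(signs)  # → [-1, 1, -1, 1]
```

The alternation of odd and even eigenfunctions seen here mirrors the positive/negative pattern in Fig. 2.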
Fig. 1 illustrates the pipeline of our method. Given an input shape, we first compute its Laplacian matrix and the first $k$ eigenfunctions (excluding the trivial eigenfunction associated with eigenvalue $0$). Instead of taking the whole shape along with the eigenfunctions as input, which would require the neural network to deal with irregular mesh connectivity, our neural network (SignNet) processes each eigenfunction separately. When the $i$-th eigenfunction $\phi_i$ is being processed, the input to the network includes not only $\phi_i$, but also the first $m$ eigenfunctions, which capture the characteristics of the input mesh and are also intrinsic.
The output of SignNet is a 2-dimensional softmax vector. The distribution of an eigenfunction on the mesh reflects the pattern of its sign to a great extent. Here we do not use the original positions of vertices as input, since they are extrinsic features. In contrast, the first few Laplacian eigenvectors are intrinsic, and thus more suitable for detecting intrinsic symmetry.
To visualize this, in Fig. 1(b), we plot the embedding of vertices, taking the first three eigenfunctions evaluated at each vertex $v$, $(\phi_1(v), \phi_2(v), \phi_3(v))$, as the vertex coordinates and $\phi_i(v)$ as the color (blue to red means small value to large value). It can be observed that the embedded shapes are extrinsically symmetric even if the mesh is only intrinsically symmetric. Also, we can see that the eigenfunctions are either symmetric or anti-symmetric in this embedding, corresponding to positive or negative eigenfunctions.
In the SignNet neural network, we use Multi-Layer Perceptrons (MLPs) to extract vertex features with increasing complexity. A max-pooling over all vertices then aggregates global features, followed by several fully-connected layers with decreasing numbers of channels. In the end, the network predicts a two-dimensional score vector $\mathbf{y} = (y_1, y_2)$, such that the sign is predicted to be negative if $y_1 > y_2$, or positive otherwise. Let $\mathbf{t}$ be the two-dimensional ground-truth label vector, with $\mathbf{t} = (1, 0)$ if the eigenfunction is negative, and $\mathbf{t} = (0, 1)$ if it is positive. The loss function is the cross-entropy between the softmax of $\mathbf{y}$ and the ground-truth sign label $\mathbf{t}$.
4.3 Training Data
Our learning-based approach requires a dataset for training. For this purpose, we choose as the training set a fusion of the SHREC 2007 set  and the elephant and flamingo models , which contains non-rigidly deformed shapes that are intrinsically symmetric. As a shape retrieval dataset, SHREC includes shapes of diverse categories. Meanwhile, these shapes are independent from the test sets (SCAPE  and TOSCA ). This ensures fairness and evaluates the generalizability of our learning-based approach.
We built a simple user interface to visualize and manually label each Laplacian eigenfunction as positive, negative, or neither, under the reflectional symmetry transform. The "neither" case occurs for shapes that are not intrinsically reflectionally symmetric, or for eigenfunctions with repeating eigenvalues, as shown in Fig. 3. These cases are excluded from our training dataset. The dataset will be released to the community to facilitate future research.
4.4 Network Architecture
In our SignNet, the input is fixed at 4500 points: meshes with fewer than 4500 vertices are zero-padded, and meshes with more than 4500 vertices are downsampled to 4500 points. The network uses multi-layer perceptrons (MLPs), a max-pooling layer, and fully connected layers. There are five MLP layers with 64, 128, 256, 512, and 4096 channels respectively, each followed by a ReLU activation layer and a batch normalization layer. A max-pooling layer then aggregates the global features. Such a combination of shared-weight MLPs and max-pooling layers has been proven effective for fitting functions defined on point sets (see the appendix in ). Finally, four fully connected layers with 512, 128, 32, and 2 output channels are applied to the global features; the first three are also followed by ReLU activation, batch normalization, and dropout (70%) layers.
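The layer widths above can be sketched as a forward pass with random, untrained weights, just to illustrate the tensor shapes of the shared-weight MLPs, the global max-pooling, and the fully connected head (batch normalization and dropout are omitted, and the weights are placeholders, not the trained model):

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)

def shared_mlp(x, widths):
    """Per-vertex shared-weight MLP: the same weights applied to every point."""
    for w in widths:
        W = rng.standard_normal((x.shape[-1], w)) * 0.1
        x = relu(x @ W)                # (n_points, w); BN omitted in this sketch
    return x

def sign_net(points_feat):
    """Shape-level sketch of SignNet with random (untrained) weights.

    points_feat: (4500, 4) per-vertex input -- the first 3 eigenfunctions
    plus the i-th eigenfunction, as in the paper.
    """
    x = shared_mlp(points_feat, [64, 128, 256, 512, 4096])  # per-vertex MLPs
    g = x.max(axis=0)                                       # global max-pooling
    for w in [512, 128, 32]:                                # FC head
        W = rng.standard_normal((g.shape[-1], w)) * 0.1
        g = relu(g @ W)                # dropout (70%) omitted at inference
    W = rng.standard_normal((g.shape[-1], 2)) * 0.1
    scores = g @ W                     # 2-dim score vector
    e = np.exp(scores - scores.max())
    return e / e.sum()                 # softmax over {negative, positive}

probs = sign_net(rng.standard_normal((4500, 4)))
print(probs.shape)  # → (2,)
```

Because the MLP weights are shared across vertices and max-pooling is permutation-invariant, the prediction does not depend on vertex ordering, which is what lets the network cope with varying mesh connectivity.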
4.5 Post-processing
In most cases, the meshes we process are not perfectly intrinsically symmetric, so the entries of the functional map matrices are not exactly -1 and +1. Moreover, owing to imperfect triangulation and the discretization of the Laplacian operator, numerically computed eigenvalues are mostly non-repeating, even though some eigenspaces actually contain multiple eigenfunctions. The blocks associated with such sub-eigenspaces therefore require more entries, usually in the form of an orthogonal sub-matrix, to describe the functional map.
5 Results and Evaluation
We first describe the implementation details of our method in Section 5.1. In Section 5.2 we compare our method with existing methods, both qualitatively and quantitatively. In addition to the accuracy of symmetry detection, we also measure the run time of different methods, showing the significant efficiency advantage of our method. We further test the robustness of our method in Section 5.4. Due to the shared-weight structure of our network, our method stays robust under different topologies and vertex counts.
5.1 Implementation Details
We now present details of the training and test process of our SignNet; see  for more implementation details related to these steps. We implement the neural network architecture with TensorFlow. The network is optimized using the Adam solver. The initial learning rate is set to , and the momentum is 0.9. We truncate at the first 12 lowest eigenvectors (i.e., $k = 12$), and by default the input feature has 4 dimensions, composed of the first 3 eigenvectors (i.e., $m = 3$) and the $i$-th eigenvector. We train the network for 500 epochs on a PC with an NVIDIA 1080Ti GPU and an Intel i7-7700 CPU.
5.2 Comparison of Results
One of the biggest advantages of learning-based methods is speed: our algorithm runs much faster than previous sampling-based intrinsic symmetry detection algorithms. Moreover, the neural network can learn common properties of eigenfunctions across models to distinguish their signs. This avoids the randomness of sampling, and thus also achieves better correspondence accuracy. In this section, we compare our method with state-of-the-art methods including MT , BIM , OFM , GRS , and FA  on the following metrics, widely used in the literature:
Correspondence rate: Assume that $(p, q)$ is a ground-truth correspondence pair, and the algorithm's prediction for $p$ is $q'$. If the geodesic distance between $q$ and $q'$ is less than a threshold $\epsilon$, we count this point as correctly matched. The correspondence rate measures the ratio of labeled points that are correctly matched.
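A minimal sketch of this metric, using a hypothetical toy "shape" of four points on a unit circle whose arc length stands in for geodesic distance:

```python
import numpy as np

def correspondence_rate(pred, gt_pairs, geo_dist, eps):
    """Fraction of labeled points matched within geodesic threshold eps.

    pred: pred[p] is the predicted symmetric point of vertex p.
    gt_pairs: list of ground-truth (p, q) correspondence pairs.
    geo_dist: (n, n) precomputed geodesic distance matrix (illustrative;
    on a real mesh it would be computed e.g. by fast marching).
    """
    correct = sum(1 for p, q in gt_pairs if geo_dist[pred[p], q] < eps)
    return correct / len(gt_pairs)

# Toy example: 4 vertices on a unit circle; arc length as "geodesic" distance.
theta = np.array([0.0, 0.5 * np.pi, np.pi, 1.5 * np.pi])
d = np.abs(theta[:, None] - theta[None, :])
geo_dist = np.minimum(d, 2 * np.pi - d)

gt_pairs = [(0, 0), (1, 3), (2, 2), (3, 1)]   # reflection across the x-axis
pred = np.array([0, 3, 2, 2])                  # one wrong prediction, vertex 3
print(correspondence_rate(pred, gt_pairs, geo_dist, eps=0.1))  # → 0.75
```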
Time: We measure the average run time of each algorithm to compute the symmetry.
We compare different methods using SCAPE  and TOSCA  datasets which contain intrinsically symmetric meshes, and the ground truth symmetric correspondences are from . We also test our method on Handstand, Swing  and FAUST  datasets for qualitative evaluation, as no ground truth correspondences are available. As we mentioned in Section 4.3, our training set is independent from the test sets, to ensure fairness.
The results on the SCAPE dataset of deformed human shapes are reported in Table 1. As can be seen, our method achieves the best accuracy, improving the correspondence rate over the previous best (FA). Both our method and FA achieve the same mesh correct rate. In terms of runtime, our method is over 100 times faster than FA, and the speedup over other existing methods is even greater.
The results on the TOSCA dataset are reported in Tables 2 and 3 for the comparisons of correspondence rate and mesh rate, respectively. We report performance on individual object categories, as well as the overall average. Our method shows similar improvements over existing methods. Qualitative comparisons are shown in Fig. 4.
5.3 Evaluation of Design Choices
As described above, by default we use the first three Laplacian eigenfunctions as coordinates to embed vertices into an intrinsic space. Unlike Laplacian eigenfunctions, raw vertex positions are invariant neither under global rigid transformations nor under non-rigid isometric deformations, and are thus unsuitable for predicting the signs of eigenfunctions on the mesh. In this experiment, we compare using positions versus eigenfunctions as input. Table 4 lists the average accuracy of sign prediction on the TOSCA and SCAPE datasets. It shows that the accuracy using positions (denoted as Pos.) is much lower than that of our design. During experiments, we observe that when model scales vary over a large range, the network with position input performs even worse.
We compute the functional map matrix by independently predicting the sign of each eigenfunction. This strategy circumvents sign flips and permutations of eigenfunctions. To show its advantage, we design another network that takes all the eigenfunctions as input and predicts all the diagonal entries at once. We denote this alternative design as Diag. in Table 4. Its accuracy of sign prediction is much lower than ours, probably because the input and output dimensions are too high for the network to learn.
The input to the network is the first $m$ eigenfunctions as well as the $i$-th eigenfunction. Too small an $m$ would make different vertices indistinguishable, making it impossible to determine the sign, while too large an $m$ would make the network more complex and introduce redundant, noisy high-frequency eigenvectors. Here we vary $m$ from 2 to 4. The table shows that $m = 3$ (Ours) achieves the best performance. Our input is defined on vertices. Although existing point-based deep learning methods such as PointNet  take extrinsic point coordinates as input, it is possible to feed our input to such architectures for prediction. We test this by feeding our input directly to PointNet  and report the accuracy of sign prediction. Its performance is also lower than that of our method, probably because our compact network design generalizes better to new data.
5.4 Robustness
We now test the robustness of our learning-based approach.
Since the geodesic distances and the eigenfunctions are defined on the manifold $M$, the topology of $M$ contributes significantly to the computation of intrinsic symmetry. For example, MT  requires the topology to be genus-zero. In our method, since the eigenfunctions behave consistently under different topologies, the network stays robust to topological changes. As shown in Fig. 5, we reconstruct meshes with self-intersections in space, and the resulting meshes are of high genus. The first row shows the original shapes with problematic regions highlighted. The second row shows the initial correspondences of the intrinsic symmetry mapping, and the correspondences after refinement. For these challenging cases, intrinsic symmetry is no longer precisely satisfied, and the refinement is effective in improving the detected symmetry.
Shapes may also contain missing data due to imperfect scanning or mesh modeling. We expect an intrinsic symmetry detection algorithm to work on such incomplete shapes, and test this by cutting holes in the surfaces of the models. Fig. 6 shows the results of our method; the detected symmetric pairs on the shapes remain reasonable.
5.5 Failure Case
As shown by the statistics, our method works well in most cases. However, due to the deterministic network structure, our method can only predict one symmetry for a given object, even if it has multiple intrinsic symmetries. In Fig. 7, the table has more than one reflectional symmetry plane, but our method cannot predict all of them. Extending our method to predict the entire symmetry group end-to-end is left as future work.
5.6 More Qualitative Results
6 Conclusions and Future Work
In this paper, we presented a novel learning-based approach to intrinsic reflectional symmetry detection. Our method is based on functional maps, and develops a neural network architecture that predicts the sign of one Laplacian eigenfunction at a time. The network takes the first few Laplacian eigenfunctions as input, in addition to the eigenfunction whose sign is to be predicted. Extensive experiments demonstrate real-time performance and superior accuracy compared with state-of-the-art methods. We also performed experiments to validate the design choices and the robustness of our method in challenging cases.
This work addresses global intrinsic reflectional symmetry, which is the most common case in practice. As future work, it would be interesting to also include rotational symmetry detection, although the structure of the functional map matrix for rotational symmetry is more complicated. Another possible direction is to extend our learning-based algorithm to partial symmetry detection.
-  (2005) SCAPE: shape completion and animation of people. In ACM transactions on graphics (TOG), Vol. 24, pp. 408–416. Cited by: §4.3, §5.2.
-  (2005) The correlated correspondence algorithm for unsupervised registration of nonrigid surfaces. In Advances in neural information processing systems, pp. 33–40. Cited by: §5.2.
-  (2014) FAUST: dataset and evaluation for 3d mesh registration. In , pp. 3794–3801. Cited by: Figure 8, §5.2, §5.6.
-  (2016) Learning shape correspondence with anisotropic convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 3189–3197. Cited by: §2.
-  (2008) Numerical geometry of non-rigid shapes. Springer Science & Business Media. Cited by: §4.3, Figure 8, §5.2.
-  (2017) Geometric deep learning: going beyond Euclidean data. IEEE Signal Processing Magazine 34 (4), pp. 18–42. Cited by: §2.
-  (2013) Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203. Cited by: §2.
-  (2017) Symmetry-aware mesh segmentation into uniform overlapping patches. In Computer Graphics Forum, Vol. 36, pp. 95–107. Cited by: §1.
-  (2007) SHREC: shape retrieval contest: watertight models track. [Online]: http://watertight.ge.imati.cnr.it. Cited by: Figure 2, §4.3, Figure 7.
-  (2010) Möbius transformations for global intrinsic symmetry analysis. In Computer Graphics Forum, Vol. 29, pp. 1689–1700. Cited by: §1, §1, §2, §5.2, §5.4, Table 1.
-  (2011) Blended intrinsic maps. In ACM Transactions on Graphics (TOG), Vol. 30, pp. 79. Cited by: §5.2, Table 1.
-  (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §5.1.
-  (2010) Symmetry factored embedding and distance. In ACM Transactions on Graphics (TOG), Vol. 29, pp. 103. Cited by: §2, §2.
-  (2015) Properly constrained orthonormal functional maps for intrinsic symmetries. Computers & Graphics 46, pp. 198–208. Cited by: §5.2, Table 1.
-  (2015) Geodesic convolutional neural networks on Riemannian manifolds. In Proceedings of the IEEE international conference on computer vision workshops, pp. 37–45. Cited by: §2.
-  Cited by: §3, §5.1.
-  (2007) Symmetrization. In ACM Transactions on Graphics (TOG), Vol. 26, pp. 63. Cited by: §2.
-  (2014) Structure-aware shape processing. In ACM SIGGRAPH 2014 Courses, pp. 13. Cited by: §1.
-  (2010) Illustrating how mechanical assemblies work. Cited by: §1.
-  (2018) Fast and accurate intrinsic symmetry detection. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 417–434. Cited by: §1, §1, §1, §2, §4.2, §4.5, Figure 4, item 2, §5.2, Table 1.
-  (2012) Functional maps: a flexible representation of maps between shapes. ACM Transactions on Graphics (TOG) 31 (4), pp. 30. Cited by: §3, §3, §3, §4.5.
-  (2008) Global intrinsic symmetries of shapes. In Computer graphics forum, Vol. 27, pp. 1341–1348. Cited by: §1, §1, §2, §2, §4.1.
-  (2017) PointNet: deep learning on point sets for 3d classification and segmentation. Proc. Computer Vision and Pattern Recognition (CVPR), IEEE 1 (2), pp. 4. Cited by: §4.4, §5.3.
-  (2019) Functional maps representation on product manifolds. Comput. Graph. Forum 38 (1), pp. 678–689. Cited by: §2.
-  (2007) Laplace-beltrami eigenfunctions for deformation invariant shape representation. In Proceedings of the fifth Eurographics symposium on Geometry processing, pp. 225–233. Cited by: §2.
-  (2016) A symmetry prior for convex variational 3d reconstruction. In European Conference on Computer Vision, pp. 313–328. Cited by: §1.
-  (2004) Deformation transfer for triangle meshes. ACM Transactions on graphics (TOG) 23 (3), pp. 399–405. Cited by: Figure 2, §4.3.
-  (2014) Relating shapes via geometric symmetries and regularities. ACM Transactions on Graphics (TOG) 33 (4), pp. 119. Cited by: §1.
-  (2008) Articulated mesh animation from multi-view silhouettes. 27 (3), pp. 97. Cited by: §5.2.
-  (2017) Group representation of global intrinsic symmetries. In Computer Graphics Forum, Vol. 36, pp. 51–61. Cited by: §1, §1, §2, Figure 4, item 2, §5.2, Table 1.
-  (2012) Multi-scale partial intrinsic symmetry detection. ACM Transactions on Graphics (TOG) 31 (6), pp. 181. Cited by: §2.
-  (2009) Partial intrinsic reflectional symmetry of 3d shapes. ACM Transactions on Graphics (TOG) 28 (5), pp. 138. Cited by: §2.
-  (2017) SyncSpecCNN: synchronized spectral CNN for 3d shape segmentation.. In CVPR, pp. 6584–6592. Cited by: §2.