1 Introduction
Correspondence problems are, without doubt, among the most important problems that have to be dealt with when analyzing highly complex geometric data. Establishing correspondence between images is a cornerstone of many algorithms in computer vision and image analysis
[50]. In geometry processing and graphics, finding intrinsic correspondence between deformable 3D shapes is one of the fundamental problems, with a plethora of applications ranging from texture mapping to animation [16]. In machine learning applications, correspondence problems in high-dimensional spaces arise in multimodal data analysis problems such as manifold alignment and multi-view clustering
[49, 14].

Related work
A wide class of correspondence methods aims at minimizing some structure distortion, which can include similarity of local features [32, 9, 2, 51, 24], geodesic [27, 7] or diffusion distances [12, 8], or a combination thereof [45]. Zeng et al. [53] used higher-order structures. Typically, the computational complexity of such methods is high, and there have been several attempts to alleviate it using hierarchical [38] or smart sampling [44] methods. Several approaches formulate the correspondence problem as an NP-hard quadratic assignment problem and propose different relaxations thereof [46, 21, 36]
. A recent trend is to use machine learning techniques, such as random forests, to learn correspondences
[39, 37].

Another important class of methods strives to embed the intrinsic structure of the manifold into some other space, where the correspondence can be parametrized with a small number of degrees of freedom. Elad and Kimmel
[13] used multidimensional scaling to embed the geodesic metric of the matched manifolds into a Euclidean space, where the alignment of the resulting “canonical forms” is then performed by simple rigid matching (ICP) [10, 3, 29]. Mateus et al. [25] used the first eigenfunctions of the Laplace-Beltrami operator as embedding coordinates. A more recent method of spectral kernel map
[40] follows this path. Lipman et al. [23, 18, 19] used conformal embeddings to parametrize correspondence as a Möbius transformation.

More recently, there is an emerging interest in soft correspondence approaches. Several methods formulated soft correspondence as a mass-transportation problem [26, 42]. Ovsjanikov et al. [31] introduced the functional correspondence framework, modeling the correspondence as a linear operator between spaces of functions on two manifolds, which has an efficient representation in the Laplacian eigenbases. In such a formulation, the problem of finding correspondence is reduced to solving a linear system of equations given some known set of corresponding functions on the two manifolds. Pokrass et al. [35] extended this approach to the case when the ordering of the corresponding functions is unknown, and introduced a regularization on the diagonal structure of the correspondence matrix representation in the Laplacian eigenbases. Kovnatsky et al. [20] proposed constructing coupled bases by simultaneous diagonalization of Laplacians. In such bases, the correspondence is represented by an approximately diagonal matrix, which allows solving the linear system of equations for the diagonal elements only.
Main contribution
Our paper deals with the problem of finding dense intrinsic correspondence between manifolds in the functional formulation. We propose treating functional correspondence as geometric matrix completion. Our approach is inspired by the recent work on recovery of matrices on graphs [17]
, which introduced geometric structure into the matrix completion problem. Treating the rows and columns of the functional correspondence matrix as vector-valued functions on the respective manifolds, we introduce geometric structure using the Dirichlet energy. We show that our method includes the previous approaches of
[31, 20] as particular settings, and compares favorably to state-of-the-art methods for non-rigid shape correspondence on the challenging Princeton benchmark [19]. The advantage of our method is especially pronounced in scarce data settings. The proposed method is completely generic and can be applied to any manifolds and high-dimensional geometric data, such as visual data and annotated images.

2 Background
Notation
In this paper, we use bold capital letters to denote matrices, bold lowercase letters for vectors, and italic lowercase letters for scalars. Given a matrix $\mathbf{A} = (a_{ij})$, we denote by $\|\mathbf{A}\|_{\mathrm{F}} = (\sum_{ij} a_{ij}^2)^{1/2}$ its Frobenius norm, by $\|\mathbf{A}\|_1 = \sum_{ij} |a_{ij}|$ its $L_1$ norm, and by $\|\mathbf{A}\|_* = \sum_i \sigma_i$ its nuclear (or trace) norm, where $\sigma_1 \ge \sigma_2 \ge \ldots$ denote the singular values of $\mathbf{A}$. Alternatively, provided a decomposition of the form $\mathbf{A} = \mathbf{B}\mathbf{C}^\top$, the nuclear norm can be written as $\|\mathbf{A}\|_* = \min_{\mathbf{A} = \mathbf{B}\mathbf{C}^\top} \tfrac{1}{2}\big(\|\mathbf{B}\|_{\mathrm{F}}^2 + \|\mathbf{C}\|_{\mathrm{F}}^2\big)$ [43]. We denote by $\boldsymbol{\delta}_i$ the Kronecker delta, i.e., a unit vector with one at position $i$ and all the rest zeros.

Laplacians
Let us be given a Riemannian manifold $\mathcal{X}$ sampled at $n$ points $x_1, \ldots, x_n$. The local structure of the manifold is modeled as an undirected graph $(\mathcal{V}, \mathcal{E})$ in the case of abstract data, or as a simplicial complex (triangular mesh) $(\mathcal{V}, \mathcal{E}, \mathcal{F})$ in the case of 3D shapes, where $\mathcal{E}$ and $\mathcal{F}$ denote the edges and faces, respectively. We denote by $L^2(\mathcal{X}) \cong \mathbb{R}^n$ the space of real functions on the discrete manifold.
We define the Laplacian of $\mathcal{X}$ as $\mathbf{L} = \mathbf{D}^{-1}\mathbf{W}$, where $\mathbf{W} = \operatorname{diag}\big(\sum_{l \ne i} w_{il}\big) - (w_{ij})$, $\mathbf{D} = \operatorname{diag}(d_i)$, and $w_{ij}$ and $d_i$ are some edge and vertex weights, respectively. The definition of the weights depends on the particular discretization of the manifold. In this paper, we consider two types of objects (see Figure 1):
Graphs, for which we use the Gaussian edge weights
$$w_{ij} = e^{-\|x_i - x_j\|^2 / 2\sigma^2}, \quad (i, j) \in \mathcal{E}, \qquad (1)$$
and construct the edge set $\mathcal{E}$ using nearest neighbors. The vertex weight is defined as $d_i = \sum_{l \ne i} w_{il}$ (random walk Laplacian) or $d_i = 1$ (unnormalized Laplacian) [48]. 3D shapes represented as point clouds are treated in this way.
Triangular meshes, for which we use the discretization of the LaplaceBeltrami operator [22] based on the classical cotangent weights [34, 28]
$$w_{ij} = \begin{cases} (\cot\alpha_{ij} + \cot\beta_{ij})/2, & (i, j) \in \mathcal{E}; \\ 0, & \text{otherwise}, \end{cases} \qquad (2)$$
where $\alpha_{ij}, \beta_{ij}$ are the angles in front of the edge $(i, j)$. The vertex weights are defined as local area elements $d_i$, equal to one third of the sum of the areas of the one-ring triangles (marked in green in Figure 1).
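Both constructions can be sketched in a few lines of NumPy (dense matrices for readability; a practical implementation would use sparse matrices, and all function names are ours):

```python
import numpy as np

def knn_gaussian_laplacian(X, k=10, sigma=1.0):
    """Unnormalized graph Laplacian L = D - W with Gaussian edge weights
    on a symmetrized k-nearest-neighbor graph; X is an (n, d) point cloud."""
    n = X.shape[0]
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        nn = np.argsort(sq[i])[1:k + 1]                  # k nearest neighbors, self excluded
        W[i, nn] = np.exp(-sq[i, nn] / (2.0 * sigma ** 2))
    W = np.maximum(W, W.T)                               # symmetrize the kNN graph
    return np.diag(W.sum(1)) - W

def cotangent_laplacian(V, F):
    """Cotangent-weight stiffness matrix and lumped one-third vertex areas
    for a triangle mesh with vertices V (n, 3) and faces F (m, 3)."""
    n = V.shape[0]
    W = np.zeros((n, n))
    a = np.zeros(n)
    for f in F:
        for t in range(3):
            i, j, k = f[t], f[(t + 1) % 3], f[(t + 2) % 3]
            u, v = V[i] - V[k], V[j] - V[k]              # edge (i, j) seen from vertex k
            cot = (u @ v) / np.linalg.norm(np.cross(u, v))
            W[i, j] += 0.5 * cot                         # accumulate cot(angle)/2
            W[j, i] += 0.5 * cot
        area = 0.5 * np.linalg.norm(np.cross(V[f[1]] - V[f[0]], V[f[2]] - V[f[0]]))
        a[f] += area / 3.0                               # one third of the area per corner
    return np.diag(W.sum(1)) - W, a
```

Both sketches annihilate constant functions (rows sum to zero) and produce symmetric positive semi-definite stiffness matrices, the two properties used throughout the sequel.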
Harmonic analysis on manifolds
The eigenvectors $\boldsymbol{\phi}_1, \boldsymbol{\phi}_2, \ldots$ of the Laplacian, satisfying $\mathbf{L}\boldsymbol{\phi}_l = \lambda_l \boldsymbol{\phi}_l$ (here $0 = \lambda_1 \le \lambda_2 \le \ldots$ are the corresponding eigenvalues), form an orthonormal basis that generalizes the Fourier basis to non-Euclidean spaces. Given a function $\mathbf{f} \in L^2(\mathcal{X})$, its Fourier coefficients are computed as $\hat{f}_l = \boldsymbol{\phi}_l^\top \mathbf{f}$. Taking the first $k$ coefficients, one obtains a smooth (“low-pass”) approximation of the function, $\mathbf{f} \approx \boldsymbol{\Phi}\boldsymbol{\Phi}^\top\mathbf{f}$, where $\boldsymbol{\Phi} = (\boldsymbol{\phi}_1, \ldots, \boldsymbol{\phi}_k)$.

Compressed manifold modes
The Laplacian eigenbasis can be found by minimizing the Dirichlet energy,
$$\min_{\boldsymbol{\Phi} \in \mathbb{R}^{n \times k}} \operatorname{trace}(\boldsymbol{\Phi}^\top \mathbf{W} \boldsymbol{\Phi}) \quad \text{s.t.} \quad \boldsymbol{\Phi}^\top \mathbf{D} \boldsymbol{\Phi} = \mathbf{I}. \qquad (3)$$
Neumann et al. [30], following Ozoliņš et al. [33], proposed constructing localized approximate eigenbases on manifolds (referred to as compressed manifold modes, or CMM) by imposing an $L_1$ penalty on $\boldsymbol{\Phi}$, which, in combination with the Dirichlet energy, makes the basis functions localized,
$$\min_{\boldsymbol{\Phi} \in \mathbb{R}^{n \times k}} \operatorname{trace}(\boldsymbol{\Phi}^\top \mathbf{W} \boldsymbol{\Phi}) + \mu \|\boldsymbol{\Phi}\|_1 \quad \text{s.t.} \quad \boldsymbol{\Phi}^\top \mathbf{D} \boldsymbol{\Phi} = \mathbf{I}, \qquad (4)$$
for some $\mu > 0$.
Functional maps
Let $\mathcal{X}$ and $\mathcal{Y}$ denote two manifolds sampled at $n$ and $m$ points, respectively. Ovsjanikov et al. [31] propose to model the functional correspondence between the spaces $L^2(\mathcal{X})$ and $L^2(\mathcal{Y})$ as the $m \times n$ matrix $\mathbf{T}$, which maps a function $\mathbf{f} \in L^2(\mathcal{X})$ into $\mathbf{T}\mathbf{f} \in L^2(\mathcal{Y})$. Traditional pointwise correspondence is a particular case of this model wherein $\mathbf{T}$ maps delta functions into delta functions.
Let $\mathbf{L}_{\mathcal{X}}$ and $\mathbf{L}_{\mathcal{Y}}$ be the Laplacians of $\mathcal{X}$ and $\mathcal{Y}$, and let $\boldsymbol{\Phi} = (\boldsymbol{\phi}_1, \ldots, \boldsymbol{\phi}_k)$ and $\boldsymbol{\Psi} = (\boldsymbol{\psi}_1, \ldots, \boldsymbol{\psi}_k)$ be the respective truncated Laplacian eigenbases. Let us be given $q$ corresponding functions $\mathbf{F} = (\mathbf{f}_1, \ldots, \mathbf{f}_q)$ and $\mathbf{G} = (\mathbf{g}_1, \ldots, \mathbf{g}_q)$ satisfying $\mathbf{G} = \mathbf{T}^*\mathbf{F}$, where $\mathbf{T}^*$ is the unknown ground-truth correspondence. The functional correspondence can be approximated in these bases as a rank-$k$ matrix $\mathbf{T} \approx \boldsymbol{\Psi}\mathbf{C}\boldsymbol{\Phi}^\top$, where $\mathbf{C}$ is a $k \times k$ matrix translating Fourier coefficients from the basis $\boldsymbol{\Phi}$ to the basis $\boldsymbol{\Psi}$.
The matrix $\mathbf{C}$ can be found by solving the system of equations $\mathbf{C}\boldsymbol{\Phi}^\top\mathbf{F} = \boldsymbol{\Psi}^\top\mathbf{G}$ in the least-squares sense,
$$\min_{\mathbf{C}} \|\mathbf{C}\boldsymbol{\Phi}^\top\mathbf{F} - \boldsymbol{\Psi}^\top\mathbf{G}\|_{\mathrm{F}}^2. \qquad (5)$$
Note that since the ground-truth correspondence is unknown, in practice the matrices $\mathbf{F}, \mathbf{G}$ must be computed independently on $\mathcal{X}$ and $\mathcal{Y}$ such that $\mathbf{G} \approx \mathbf{T}^*\mathbf{F}$. In the simplest case, the columns of $\mathbf{F}, \mathbf{G}$ are delta functions representing some known pointwise correspondences (“seeds”) between points on $\mathcal{X}$ and $\mathcal{Y}$. In shape correspondence applications, $\mathbf{F}, \mathbf{G}$ are typically computed using some intrinsic shape descriptor, such as HKS [32], WKS [2], MeshHOG [51], or ShapeMSER [24], which is invariant to shape deformations.
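In a truncated basis, the least-squares solve in (5) is a small linear problem; a minimal NumPy sketch (variable and function names are ours), where `A = Phi.T @ F` and `B = Psi.T @ G` hold the Fourier coefficients of the data:

```python
import numpy as np

def functional_map_lstsq(A, B):
    """Least-squares functional map C (k x k) such that C @ A ~ B, where
    A, B (k x q) hold the Fourier coefficients of q corresponding functions
    in the two truncated eigenbases: solves min_C ||C A - B||_F."""
    # C A = B  <=>  A^T C^T = B^T: one standard least-squares solve
    Ct, *_ = np.linalg.lstsq(A.T, B.T, rcond=None)
    return Ct.T
```

With $q \ge k$ and generic data the system is overdetermined and the solution is unique.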
For applications requiring pointwise maps, Ovsjanikov et al. [31] devise an iterative conversion scheme similar to iterative closest point (ICP) methods [10, 3, 29]. Thinking of $\boldsymbol{\Phi}\mathbf{C}^\top$ and $\boldsymbol{\Psi}$ as $k$-dimensional point clouds (with $n$ and $m$ points, respectively), the matrix $\mathbf{C}$ can be interpreted as a rigid alignment between them. Starting with some initial matrix $\mathbf{C}$, first, for each $i$th row of $\boldsymbol{\Phi}\mathbf{C}^\top$, find the closest row $j_i$ in $\boldsymbol{\Psi}$ (this step is equivalent to the closest-point correspondence in ICP). Second, find an orthonormal $\mathbf{C}$ minimizing the resulting alignment error (this is equivalent to the alignment step in ICP). The process is repeated until convergence, producing pointwise correspondences $x_i \mapsto y_{j_i}$.
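A simplified sketch of this conversion (assuming orthonormal truncated bases `Phi` of size n × k and `Psi` of size m × k; the naming and the brute-force nearest-row search are ours):

```python
import numpy as np

def icp_refine(C, Phi, Psi, n_iter=10):
    """ICP-like refinement of a k x k functional map C: alternate a
    closest-row matching between Phi @ C.T (n x k) and Psi (m x k) with an
    orthogonal Procrustes alignment. Returns the refined C and the
    pointwise map pi (pi[i] is the match on Y of point i on X)."""
    for _ in range(n_iter):
        P = Phi @ C.T                                 # spectral images of deltas on X
        d = ((P[:, None, :] - Psi[None, :, :]) ** 2).sum(-1)
        pi = d.argmin(1)                              # closest-point step
        U, _, Vt = np.linalg.svd(Psi[pi].T @ Phi)     # alignment (Procrustes) step
        C = U @ Vt                                    # orthonormal update of C
    return C, pi
```

When the two bases coincide and the initial map is the identity, the scheme is a fixed point: the matching stays the identity permutation and the Procrustes step returns the identity map.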
Finally, note that in order for system (5) to be (over-)determined, we must have $k \le q$, where $q$ is the number of given corresponding functions (data). At the same time, the quality of the correspondence depends on $k$: the more basis elements are taken, the better, since, due to the truncation of the Fourier expansion, a small $k$ results in poor spatial localization of the correspondence. Furthermore, truncated Fourier series of a discontinuous function manifest oscillations, a behavior known in harmonic analysis as the “Gibbs phenomenon” (see examples in Figure 5).
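The truncation behavior is easy to reproduce: projecting a function on the first $k$ Laplacian eigenvectors gives exactly the “low-pass” approximation described in Section 2 (a sketch; function names are ours):

```python
import numpy as np

def laplacian_eigenbasis(L, k):
    """First k eigenvectors of a symmetric Laplacian (ascending eigenvalues)."""
    lam, Phi = np.linalg.eigh(L)
    return lam[:k], Phi[:, :k]

def lowpass(f, Phi):
    """Truncated Fourier expansion of f in the basis Phi: f ~ Phi (Phi^T f)."""
    return Phi @ (Phi.T @ f)
```

Applied to a step function on a path graph with a small $k$, the reconstruction over- and undershoots near the discontinuity, which is precisely the Gibbs phenomenon mentioned above.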
Permuted sparse coding
Pokrass et al. [35] addressed the case when the ordering of the columns of $\mathbf{G}$ is unknown and is modeled by a permutation matrix $\boldsymbol{\Pi}$. Furthermore, they noticed that for near-isometric manifolds, the eigenbases satisfy $\boldsymbol{\psi}_i \approx \pm\mathbf{T}\boldsymbol{\phi}_i$, i.e., the matrix $\mathbf{C}$ is approximately diagonal. The diagonal structure of $\mathbf{C}$ was induced using a weighted $L_1$ penalty,
$$\min_{\mathbf{C}, \boldsymbol{\Pi}} \|\mathbf{C}\boldsymbol{\Phi}^\top\mathbf{F} - \boldsymbol{\Psi}^\top\mathbf{G}\boldsymbol{\Pi}\|_{\mathrm{F}}^2 + \mu \|\mathbf{M} \circ \mathbf{C}\|_1, \qquad (6)$$
where $\boldsymbol{\Pi}$ is the unknown permutation, $\mathbf{M}$ is a weight matrix with zero diagonal, and $\circ$ denotes the Hadamard (element-wise) matrix multiplication. The problem was solved by means of alternating minimization w.r.t. $\boldsymbol{\Pi}$ (which turns out to be a linear assignment problem) and $\mathbf{C}$ (which is equivalent to sparse coding).
Coupled diagonalization
Kovnatsky et al. [20] noticed that the matrix $\mathbf{C}$ encoding the correspondence depends on the choice of bases, and proposed finding new approximate eigenbases $\hat{\boldsymbol{\Phi}} = \boldsymbol{\Phi}\mathbf{R}$ and $\hat{\boldsymbol{\Psi}} = \boldsymbol{\Psi}\mathbf{Q}$ by means of orthonormal matrices $\mathbf{R}, \mathbf{Q}$, in which the Fourier coefficients of $\mathbf{F}$ and $\mathbf{G}$ are as similar as possible. The new bases behave as eigenbases if they approximately diagonalize the respective Laplacians, i.e., if $\mathbf{R}^\top\boldsymbol{\Lambda}\mathbf{R}$ (respectively, $\mathbf{Q}^\top\boldsymbol{\Gamma}\mathbf{Q}$) is approximately diagonal, where $\boldsymbol{\Lambda}$ and $\boldsymbol{\Gamma}$ are the diagonal matrices of the Laplacian eigenvalues. This property is enforced in the optimization problem
$$\min_{\mathbf{R}, \mathbf{Q}} \|\mathbf{R}^\top\boldsymbol{\Phi}^\top\mathbf{F} - \mathbf{Q}^\top\boldsymbol{\Psi}^\top\mathbf{G}\|_{\mathrm{F}}^2 + \mu\operatorname{off}(\mathbf{R}^\top\boldsymbol{\Lambda}\mathbf{R}) + \mu\operatorname{off}(\mathbf{Q}^\top\boldsymbol{\Gamma}\mathbf{Q}) \qquad (7)$$
(with $\mathbf{R}^\top\mathbf{R} = \mathbf{Q}^\top\mathbf{Q} = \mathbf{I}$) using the off-diagonal penalty $\operatorname{off}(\mathbf{A}) = \sum_{i \ne j} a_{ij}^2$.
After $\mathbf{R}$ and $\mathbf{Q}$ are found, in the new bases $\hat{\boldsymbol{\Phi}}, \hat{\boldsymbol{\Psi}}$, the matrix $\mathbf{C}$ is approximately diagonal. Then, the system of equations is solved for the diagonal elements of $\mathbf{C}$ only.
3 Functional maps as matrix completion
Kalofolias et al. [17] studied the problem of matrix completion on graphs, where the rows and columns of the matrix have underlying geometric structure. They show that adding geometric structure to the standard matrix completion problem improves recovery results.
We use the same philosophy to formulate the problem of finding a functional map as matrix completion, whose rows and columns are interpreted as functions on the respective manifolds $\mathcal{X}$ and $\mathcal{Y}$. For this purpose, we consider the $m \times n$ matrix $\mathbf{T}$ as a collection of columns $\mathbf{t}_1, \ldots, \mathbf{t}_n$ or rows $\mathbf{t}^1, \ldots, \mathbf{t}^m$, where $\mathbf{t}_j$ and $\mathbf{t}^i$ denote the $j$th column and $i$th row of $\mathbf{T}$, respectively. The column $\mathbf{t}_j$ is the function on $\mathcal{Y}$ corresponding to a delta located at point $x_j$ on $\mathcal{X}$. Similarly, the row $\mathbf{t}^i$ is the function on $\mathcal{X}$ corresponding to a delta located at point $y_i$ on $\mathcal{Y}$.
As in the classical matrix completion problem, we aim at recovering the unknown correspondence matrix $\mathbf{T}$ from a few observations of the form $\mathbf{G} \approx \mathbf{T}\mathbf{F}$, looking for a matrix that explains the data in a “simple” way, in the sense discussed in the following. Our problem comprises the following terms:
Data term
The correspondence should respect the data, which is achieved by minimizing $\|\mathbf{T}\mathbf{F} - \mathbf{G}\|_{\mathrm{F}}^2$.
Smoothness
The correspondence must be smooth, in the sense that if $x_j, x_{j'}$ are two close points on $\mathcal{X}$, then the respective corresponding functions are similar, i.e., $\mathbf{t}_j \approx \mathbf{t}_{j'}$ (see Figure 2). Similarly, for close points $y_i, y_{i'}$ on $\mathcal{Y}$, the rows $\mathbf{t}^i \approx \mathbf{t}^{i'}$. Smoothness is achieved by minimizing the row and column Dirichlet energy $\operatorname{trace}(\mathbf{T}\mathbf{L}_{\mathcal{X}}\mathbf{T}^\top) + \operatorname{trace}(\mathbf{T}^\top\mathbf{L}_{\mathcal{Y}}\mathbf{T})$.
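This smoothness term can be evaluated directly from the two Laplacians; a minimal NumPy sketch (our naming; `T` is m × n, `Lx` is n × n, `Ly` is m × m):

```python
import numpy as np

def dirichlet_energy(T, Lx, Ly):
    """Row-and-column Dirichlet energy of a correspondence matrix T (m x n):
    trace(T Lx T^T) penalizes rows that are non-smooth as functions on X,
    trace(T^T Ly T) penalizes columns that are non-smooth as functions on Y."""
    return np.trace(T @ Lx @ T.T) + np.trace(T.T @ Ly @ T)
```

A constant matrix has zero energy (Laplacians annihilate constants), and the energy is nonnegative since both Laplacians are positive semi-definite.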
Localization
The corresponding functions should be spatially localized (ideally, deltas are mapped to delta-like functions), which is promoted by an $L_1$ penalty in the spirit of compressed manifold modes (4).
“Simplicity”
By simplicity, we mean here that the correspondence matrix is ‘explained’ using a small number of degrees of freedom. The following models are commonly used in matrix completion and recommendation systems literature.
Fixed rank. The most straightforward model is to assume that $\operatorname{rank}(\mathbf{T}) = r$. In this setting, we can decompose $\mathbf{T} = \mathbf{A}\mathbf{B}^\top$ into factors $\mathbf{A}$ and $\mathbf{B}$ of size $m \times r$ and $n \times r$, respectively. Combining all the above terms leads to the optimization problem
$$\min_{\mathbf{A}, \mathbf{B}} \|\mathbf{A}\mathbf{B}^\top\mathbf{F} - \mathbf{G}\|_{\mathrm{F}}^2 + \mu_1 \big(\operatorname{trace}(\mathbf{A}\mathbf{B}^\top\mathbf{L}_{\mathcal{X}}\mathbf{B}\mathbf{A}^\top) + \operatorname{trace}(\mathbf{B}\mathbf{A}^\top\mathbf{L}_{\mathcal{Y}}\mathbf{A}\mathbf{B}^\top)\big) + \mu_2 \big(\|\mathbf{A}\|_1 + \|\mathbf{B}\|_1\big), \qquad (8)$$
where $\mu_1, \mu_2 \ge 0$ are parameters determining the tradeoff between smoothness and localization.
Low rank. Another popular model is to find the matrix $\mathbf{T}$ with the smallest rank. However, rank minimization is known to be NP-hard, and the nuclear norm $\|\mathbf{T}\|_*$ is typically used as a convex proxy, leading to
$$\min_{\mathbf{T}} \|\mathbf{T}\mathbf{F} - \mathbf{G}\|_{\mathrm{F}}^2 + \mu_1 \big(\operatorname{trace}(\mathbf{T}\mathbf{L}_{\mathcal{X}}\mathbf{T}^\top) + \operatorname{trace}(\mathbf{T}^\top\mathbf{L}_{\mathcal{Y}}\mathbf{T})\big) + \mu_3 \|\mathbf{T}\|_*. \qquad (9)$$
This problem is convex and is typically solved using augmented Lagrangian methods [5].
Low norm. Srebro et al. [43] rewrite problem (9) using the decomposition $\mathbf{T} = \mathbf{A}\mathbf{B}^\top$ with $\mathbf{A}$ and $\mathbf{B}$ of size $m \times r$ and $n \times r$, respectively. Note that unlike the fixed rank case, here $r$ can be arbitrarily large. The nuclear norm is written as $\|\mathbf{T}\|_* = \min_{\mathbf{T} = \mathbf{A}\mathbf{B}^\top} \tfrac{1}{2}\big(\|\mathbf{A}\|_{\mathrm{F}}^2 + \|\mathbf{B}\|_{\mathrm{F}}^2\big)$. We thus have the problem
$$\min_{\mathbf{A}, \mathbf{B}} \|\mathbf{A}\mathbf{B}^\top\mathbf{F} - \mathbf{G}\|_{\mathrm{F}}^2 + \mu_1 \big(\operatorname{trace}(\mathbf{A}\mathbf{B}^\top\mathbf{L}_{\mathcal{X}}\mathbf{B}\mathbf{A}^\top) + \operatorname{trace}(\mathbf{B}\mathbf{A}^\top\mathbf{L}_{\mathcal{Y}}\mathbf{A}\mathbf{B}^\top)\big) + \tfrac{\mu_3}{2} \big(\|\mathbf{A}\|_{\mathrm{F}}^2 + \|\mathbf{B}\|_{\mathrm{F}}^2\big). \qquad (10)$$
Unlike (9), this problem is non-convex w.r.t. both $\mathbf{A}$ and $\mathbf{B}$; however, it behaves well for large $r$ [43].
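For concreteness, the objective of (10) can be written down in a few lines (a sketch of the formulation rather than the authors' code; factor, parameter, and function names are ours):

```python
import numpy as np

def low_norm_objective(A, B, F, G, Lx, Ly, mu1=1.0, mu3=1.0):
    """Value of the low-norm objective for T = A @ B.T, with factors
    A (m, r), B (n, r) and data F (n, q), G (m, q): data fidelity,
    row/column Dirichlet smoothness, and the nuclear-norm surrogate."""
    T = A @ B.T
    data = np.linalg.norm(T @ F - G) ** 2                       # fidelity term
    smooth = np.trace(T @ Lx @ T.T) + np.trace(T.T @ Ly @ T)    # Dirichlet energies
    surrogate = 0.5 * (np.linalg.norm(A) ** 2 + np.linalg.norm(B) ** 2)
    return data + mu1 * smooth + mu3 * surrogate
```

With consistent data ($\mathbf{G} = \mathbf{A}\mathbf{B}^\top\mathbf{F}$) and the regularization weights switched off, the objective vanishes, which is a convenient sanity check for an implementation.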
3.1 Subspace parametrization
The main disadvantage of problems (8)–(10) is that the number of variables depends on the numbers of samples $n$ and $m$, which may result in scalability issues for large manifolds. To overcome this issue, we approximate the factors $\mathbf{A}$ and $\mathbf{B}$ in the truncated Laplacian eigenbases of $\mathcal{Y}$ and $\mathcal{X}$ using the first $k$ expansion terms, $\mathbf{A} \approx \boldsymbol{\Psi}\tilde{\mathbf{A}}$ and $\mathbf{B} \approx \boldsymbol{\Phi}\tilde{\mathbf{B}}$, where the matrices $\tilde{\mathbf{A}}, \tilde{\mathbf{B}}$ of expansion coefficients are of size $k \times r$. This leads to a subspace version of problem (10),
where we used the invariance of the Frobenius norm to orthogonal transformations and the fact that $\boldsymbol{\Phi}^\top\mathbf{L}_{\mathcal{X}}\boldsymbol{\Phi}$ and $\boldsymbol{\Psi}^\top\mathbf{L}_{\mathcal{Y}}\boldsymbol{\Psi}$ are the diagonal matrices of eigenvalues to simplify the expressions.
Note that now the number of variables is independent of the number of samples. We emphasize that $k$ and $r$ can be arbitrarily large, as they are dictated only by complexity considerations and not by the amount of data. This is one of the major advantages of our approach compared to [31, 35, 20], which we demonstrate experimentally in the following section (see examples in Figure 5). Finally, we observe that the subspace version of the fixed rank problem (8) is a particular setting of (3.1) with an appropriate choice of parameters.
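The simplifications used here are easy to verify numerically: with orthonormal eigenbases, the Frobenius norm of a factor equals that of its coefficient matrix, and the Dirichlet energy of a factor reduces to a sum weighted by the eigenvalues (a sketch; names are ours):

```python
import numpy as np

def reduced_dirichlet(Q, lam):
    """Dirichlet energy of a factor B = Phi @ Q when Phi holds Laplacian
    eigenvectors with eigenvalues lam: trace(B^T L B) = trace(Q^T diag(lam) Q)."""
    return np.trace(Q.T @ (lam[:, None] * Q))
```

This is what makes the subspace problem depend only on the $k \times r$ coefficient matrices, not on the sample counts.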
3.2 Relation to other approaches
Ovsjanikov et al. [31] solve a fixed-rank approximation problem for $\mathbf{T}$ expressed in the truncated Laplacian eigenbases as $\mathbf{T} = \boldsymbol{\Psi}\mathbf{C}\boldsymbol{\Phi}^\top$. It boils down to a particular setting of our formulation (3.1) with the smoothness and localization terms switched off. Note that in (5), the basis size $k$ is bounded by the size of the data $q$: if $k > q$, the system of equations in (5) becomes underdetermined. As a consequence, in scarce data settings (small $q$), the method performs very poorly (see Figure 5). Pokrass et al. [35] solve the problem of [31] with an additional prior on the diagonal structure of $\mathbf{C}$, which holds only for approximately isometric manifolds. Our method is generic and can handle highly non-isometric manifolds.
Kovnatsky et al. [20] look for a pair of bases in which one tries to achieve a diagonal $\mathbf{C}$. The bases are constructed as rotations of the Laplacian eigenbases (7) by means of orthonormal matrices $\mathbf{R}, \mathbf{Q}$. This problem is equivalent to a setting of our problem (3.1), with the only difference that the Dirichlet energy terms are replaced with off-diagonality terms.
Finally, we emphasize that our method is “basis-free”: we work directly with the correspondence matrix and, rather than imposing some structure on the representation of the correspondence in some basis (as done in the prior works [35, 20, 30]), impose structure on the correspondence matrix directly. This also allows us to work simultaneously with very different manifold representations (such as meshes and graphs, see Figure 8). The factors $\mathbf{A}, \mathbf{B}$ in our problem can be interpreted as ad hoc bases in which the correspondence is represented as the identity matrix, $\mathbf{T} = \mathbf{A}\mathbf{I}\mathbf{B}^\top$. These bases are coupled (see Figure 3, bottom), and thus our problem also acts similarly to the simultaneous approximate Laplacian diagonalization procedure [20].

3.3 Numerical optimization
Manifold optimization
We used the manifold optimization toolbox manopt [4] to solve our problem, in which the non-differentiable $L_1$ norm term was approximated using a smooth proxy. The main idea of manifold optimization is to model the space of fixed-rank matrices as a Riemannian manifold $\mathcal{M}$ and perform descent (e.g., by the method of conjugate gradients) on this manifold. The intrinsic gradient of the cost function on $\mathcal{M}$ is a vector in the tangent space; in order to construct the conjugate gradient search direction, gradients from previous iterations must be brought to the same tangent space by means of parallel transport on $\mathcal{M}$. After performing a descent step in the tangent space, the iterate is retracted to the manifold by means of the exponential map. For additional details on the use of manifold optimization in matrix completion problems, we refer the reader to [47].
Initialization
Assume we are given some matrix $\mathbf{T}_0$ as the initial correspondence. We initialize our problem by first computing the singular value decomposition $\mathbf{T}_0 = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^\top$ and then setting $\mathbf{A}_0 = \mathbf{U}\boldsymbol{\Sigma}^{1/2}$ and $\mathbf{B}_0 = \mathbf{V}\boldsymbol{\Sigma}^{1/2}$. In our experiments, we used the correspondence of Ovsjanikov et al. [31] as the initial $\mathbf{T}_0$.
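This balanced SVD initialization can be sketched as follows (the function name is ours):

```python
import numpy as np

def init_factors(T0, r):
    """Balanced initialization from an initial correspondence T0 via a
    truncated SVD: A0 = U sqrt(S), B0 = V sqrt(S), so that T0 ~ A0 @ B0.T
    and ||A0||_F = ||B0||_F."""
    U, s, Vt = np.linalg.svd(T0, full_matrices=False)
    A0 = U[:, :r] * np.sqrt(s[:r])      # scale left singular vectors
    B0 = Vt[:r].T * np.sqrt(s[:r])      # scale right singular vectors
    return A0, B0
```

Splitting the singular values evenly between the two factors keeps their Frobenius norms equal, which matches the nuclear-norm surrogate of (10) at initialization.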
4 Results
4.1 3D Shapes
Data
We used the TOSCA [6] dataset, containing CAD models of humans and animals with 5K–50K vertices, and the SCAPE [1] dataset, containing meshes of scanned humans in different poses with about 10K vertices. For meshes, we used the cotangent formula (2) to construct the Laplacian. Point clouds were created by removing the mesh structure from the triangular meshes; nearest neighbors were then used to construct unnormalized graph Laplacians according to (1), with the scale found using the self-tuning method [52]. In our experiments, as the data term we used one of the following three types of descriptors (computed fully automatically):
Segments produced by the persistence-based segmentation of [41], represented as binary indicator functions. A typical number of segments per shape was .
Localized WKS+HKM. 100 Wave kernel signatures (WKS) [2] and 100 Heat kernel maps (HKM) [32], localized on the aforementioned segments, resulting in a total of functions per segment, or functions per shape.
Shape MSER. Stable regions detected using the method of [24], around regions per shape. The MSER regions were represented as binary indicator functions.
The Segments, WKS, and HKM descriptors were provided by Ovsjanikov. ShapeMSER descriptors were computed using the code and settings of [24].
Performance criteria
For the evaluation of the correspondence quality, we used the Princeton benchmark protocol [19] with two error criteria:
Hard error is the criterion used in [19] for pointwise maps. Assume that a correspondence algorithm produces a pair of points $(x, y)$, whereas the ground-truth correspondence is $(x, y^*)$. Then, the inaccuracy of the correspondence is measured as
$$\epsilon(x) = \frac{d_{\mathcal{Y}}(y, y^*)}{\sqrt{\operatorname{area}(\mathcal{Y})}} \qquad (12)$$
and has units of normalized length on $\mathcal{Y}$ (ideally, zero). Here $d_{\mathcal{Y}}$ denotes the geodesic distance on $\mathcal{Y}$, whose total area is computed from the local area elements on $\mathcal{Y}$.
Soft error is the generalization of (12) to soft maps. Under functional correspondence, a delta $\boldsymbol{\delta}_x$ on $\mathcal{X}$ is mapped to the function $\mathbf{t}_x = \mathbf{T}\boldsymbol{\delta}_x$ on $\mathcal{Y}$, which ideally should be $\boldsymbol{\delta}_{y^*}$. The inaccuracy of functional correspondence is measured as the average geodesic distance from the ground-truth point $y^*$ to the $i$th point $y_i$ on $\mathcal{Y}$, weighted by the absolute value of the corresponding function,
$$\epsilon(x) = \frac{\sum_i |t_x(y_i)|\, d_{\mathcal{Y}}(y_i, y^*)}{\sqrt{\operatorname{area}(\mathcal{Y})}\, \sum_i |t_x(y_i)|}. \qquad (13)$$
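A sketch of the soft criterion (names are ours; `Dy` is a precomputed geodesic distance matrix on Y, and the constant area normalization of (13) is omitted):

```python
import numpy as np

def soft_error(t, Dy, j_true):
    """Average geodesic distance from the ground-truth point j_true, weighted
    by the absolute value of the functional image t of a delta function."""
    w = np.abs(t)
    return (w @ Dy[j_true]) / w.sum()
```

A perfect delta image yields zero error; spreading mass over points far from the ground-truth match increases it.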
For a set of error values $\epsilon$, we plot the percentage of points producing correspondence with error smaller than $\epsilon$ (the higher the curve, the better).
Influence of parameters
To study the behavior of our problem, we conducted a set of experiments in ‘sterile’ conditions, using a pair of near-isometric human shapes with ideally corresponding data. The number of corresponding functions in the data term was . The correspondence quality was evaluated using the soft criterion (13), to rule out any artifacts introduced by the conversion to a pointwise map.
Rank. We set problem (3.1) in the fixed rank regime with all other parameters fixed, and varied the rank $r$. For comparison, we ran the methods of Ovsjanikov et al. [31] and Pokrass et al. [35] with the same rank. Note that the performance of [31, 35] degrades when the rank exceeds the amount of given data, as opposed to the increase in the performance of our approach (Figure 5). Practically, this means that the correspondence quality they can obtain is bounded by the amount of given data, while our method can work well with very scarce data.
Localization. We repeated the same experiments with the rank fixed and a varying localization weight. Increasing the weight of the localization term results in better localization of the correspondence (Figure 6) and, as a consequence, better performance.
Princeton benchmark
We used the Princeton evaluation protocol (allowing for symmetric matches, since our method is fully intrinsic and thus produces correspondence defined up to intrinsic symmetry) on the SCAPE dataset and followed verbatim the settings of [31, 35].
We compared the methods of [31, 35] (using code provided by the authors) and our method in two settings: scarce data, using Segments (one binary indicator function per segment), and rich data, using localized WKS+HKM. Our method was used with different parameters in the two cases. The resulting functional maps were converted into pointwise maps using the ICP-like method of [31]. Correspondence quality was evaluated using the hard error criterion. For comparison, we also reproduce the performance of some recent state-of-the-art pointwise correspondence algorithms: blended maps [19], best conformal (the least-distorting map without blending), Möbius voting [23], HKM [32], and GMDS [7]. The results are shown in Figure 7. Our method achieves state-of-the-art performance; its advantage is especially prominent in the scarce data setting.
Robustness
Figure 8 depicts examples of shape correspondence obtained with the proposed technique. The top row shows examples of SCAPE shape matching using Segments data. Given that this data is very scarce ( segments per shape), we find it quite remarkable that our method is able to obtain very high quality correspondence. The second row shows examples of correspondence in the presence of geometric and topological noise, using ShapeMSER data. Our method handles well even large missing parts. The third row shows examples of non-isometric shape matching, and the fourth row shows a particularly challenging setting of mesh-to-point-cloud matching.
4.2 Abstract manifolds
Visual manifolds
We used the dataset of [15], containing 831 120×100 images of a human face depicted with varying pitch and yaw angles (Figure 9, left), and 698 64×64 images of a statue with varying pitch and yaw angles, as well as lighting (Figure 9, right). Each image can be considered as a point on a “visual manifold” embedded in a high-dimensional space (12,000-dimensional in the case of the faces and 4096-dimensional in the case of the statues); the structure and intrinsic dimension of the two manifolds are different (the statue has additional dimensions due to the varying lighting). In our discrete setting, the intrinsic structure of the manifolds was represented by unnormalized graph Laplacians (1) with 20 nearest neighbors and self-tuning scale [52]. The 25 pairs of roughly corresponding face and statue poses shown in Figure 9 were used as the data. The goal of the experiment was to establish a meaningful correspondence between the two visual manifolds given this sparse set of corresponding points. We compared the functional correspondences obtained with the methods of Ovsjanikov et al. [31], Pokrass et al. [35], and the proposed method.
Figure 10 depicts the obtained correspondences, showing that our method achieves a better correspondence in terms of proximity to the ground-truth pitch and yaw angles.
Flickr tagged images
We used the multimodal dataset from [14], a challenging subset of 145 annotated Flickr images taken from the NUS-WIDE dataset [11]. The visual data are represented as 64-dimensional color histograms; the text annotations are represented by 1000-dimensional bags of words. The dataset is noisy and heterogeneous: each image has 3–29 tags, some of which are irrelevant or wrong.
We represented the structures of the image and text modalities using graph Laplacians constructed with 20 and 17 nearest neighbors, respectively, and self-tuning scale [52]. 25 corresponding functions were used as the data to construct the functional correspondence with our method. Then, for each image, we created a word cloud as follows: the absolute values of the functional map of the image on the tag manifold were used as weights; the seven most important tags were selected and shown in a font size proportional to the tag weight (Figure 11). Note that since some images have only a few ground-truth tags, the remaining displayed tags may be irrelevant; however, their importance is tiny (e.g., the rightmost image in Figure 11).
5 Conclusions
We presented a novel method for finding functional correspondence between manifolds based on the geometric matrix completion framework. We model the correspondence between the spaces of functions on two manifolds as a matrix, whose rows and columns have the geometric structure of the underlying manifolds, captured through their Laplacian operators. We discuss several flavors of our problem, and present an efficient subspace version thereof.
Experimental results on standard shape correspondence benchmarks show that our method beats some of the recent state-of-the-art methods, and compares particularly favorably to previous functional correspondence methods in situations when the given correspondence data are very scarce. This allows tackling challenging cases such as matching non-isometric shapes, shapes with missing parts, different representations (meshes vs. point clouds), and shapes corrupted by geometric and topological noise. Furthermore, we applied our method to general high-dimensional multimodal data, and the obtained correspondences are meaningful and robust. The proposed approach can cope with any other multimodal data; the only change would be the construction of the Laplacian that captures its geometric structure.
In terms of optimization, while manopt is well-suited for smooth optimization, it would be advantageous to devise a version of manifold optimization for non-smooth functions such as the $L_1$ norm. An alternative path is to use augmented Lagrangian methods.
On a more philosophical note, we believe that the importance of our work is in bringing a well-known signal processing technique into the domain of geometry processing, continuing the trend of [35], who introduced sparse coding to the problem of shape correspondence. Conversely, our work follows the very recent trend of bringing geometric structure into signal processing problems such as compressed sensing and matrix completion. We believe that a cross-fertilization of these two fields could be very fruitful.
References
 [1] D. Anguelov, P. Srinivasan, D. Koller, S. Thrun, J. Rodgers, and J. Davis. SCAPE: shape completion and animation of people. TOG, 24(3):408–416, 2005.
 [2] M. Aubry, U. Schlickewei, and D. Cremers. The wave kernel signature: A quantum mechanical approach to shape analysis. In Proc. 4DMOD, 2011.
 [3] P. J. Besl and N. D. McKay. A method for registration of 3D shapes. PAMI, 14(2):239–256, 1992.
 [4] N. Boumal, B. Mishra, P.-A. Absil, and R. Sepulchre. Manopt, a Matlab toolbox for optimization on manifolds. JMLR, 15:1455–1459, 2014.
 [5] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
 [6] A. Bronstein, M. Bronstein, and R. Kimmel. Numerical Geometry of Non-Rigid Shapes. Springer, 2008.
 [7] A. M. Bronstein, M. M. Bronstein, and R. Kimmel. Generalized multidimensional scaling: a framework for isometry-invariant partial surface matching. PNAS, 103(5):1168–1172, 2006.
 [8] A. M. Bronstein, M. M. Bronstein, R. Kimmel, M. Mahmoudi, and G. Sapiro. A Gromov-Hausdorff framework with diffusion geometry for topologically-robust non-rigid shape matching. IJCV, 89(2-3):266–286, 2010.
 [9] M. M. Bronstein and I. Kokkinos. Scale-invariant heat kernel signatures for non-rigid shape recognition. In Proc. CVPR, 2010.
 [10] Y. Chen and G. Medioni. Object modeling by registration of multiple range images. In Proc. Conf. Robotics and Automation, 1991.
 [11] T. S. Chua, J. Tang, R. Hong, H. Li, Z. Luo, and Y. T. Zheng. NUS-WIDE: a real-world web image database from National University of Singapore. In Proc. CIVR, 2009.
 [12] R. R. Coifman, S. Lafon, A. B. Lee, M. Maggioni, B. Nadler, F. Warner, and S. W. Zucker. Geometric diffusions as a tool for harmonic analysis and structure definition of data: Diffusion maps. PNAS, 102(21):7426–7431, 2005.
 [13] A. Elad and R. Kimmel. Bending invariant representations for surfaces. In Proc. CVPR, 2001.
 [14] D. Eynard, K. Glashoff, M. M. Bronstein, and A. M. Bronstein. Multimodal diffusion geometry by joint diagonalization of Laplacians. arXiv:1209.2295, 2012.

 [15] J. Ham, D. Lee, and L. Saul. Semi-supervised alignment of manifolds. In Proc. Workshop on Artificial Intelligence and Statistics, 2005.
 [16] O. Kaick, H. Zhang, G. Hamarneh, and D. Cohen-Or. A survey on shape correspondence. Computer Graphics Forum, 20:1–23, 2010.
 [17] V. Kalofolias, X. Bresson, M. Bronstein, and P. Vandergheynst. Matrix completion on graphs. arXiv:1408.1717, 2014.
 [18] V. G. Kim, Y. Lipman, X. Chen, and T. A. Funkhouser. Möbius transformations for global intrinsic symmetry analysis. Computer Graphics Forum, 29(5):1689–1700, 2010.
 [19] V. G. Kim, Y. Lipman, and T. Funkhouser. Blended intrinsic maps. TOG, 30(4):79, 2011.
 [20] A. Kovnatsky, M. M. Bronstein, A. M. Bronstein, K. Glashoff, and R. Kimmel. Coupled quasiharmonic bases. Computer Graphics Forum, 32:439–448, 2013.
 [21] M. Leordeanu and M. Hebert. A spectral technique for correspondence problems using pairwise constraints. In Proc. ICCV, 2005.
 [22] B. Lévy. Laplace-Beltrami eigenfunctions towards an algorithm that understands geometry. In Proc. SMI, 2006.
 [23] Y. Lipman and I. Daubechies. Conformal Wasserstein distances: Comparing surfaces in polynomial time. Advances in Mathematics, 227(3):1047 – 1077, 2011.
 [24] R. Litman, A. M. Bronstein, and M. M. Bronstein. Diffusion-geometric maximally stable component detection in deformable shapes. Computers & Graphics, 35(3):549–560, 2011.
 [25] D. Mateus, R. P. Horaud, D. Knossow, F. Cuzzolin, and E. Boyer. Articulated shape matching using Laplacian eigenfunctions and unsupervised point registration. In Proc. CVPR, 2008.
 [26] F. Mémoli. GromovWasserstein Distances and the Metric Approach to Object Matching. Foundations of Computational Mathematics, pages 1–71, 2011.
 [27] F. Mémoli and G. Sapiro. A theoretical and computational framework for isometry invariant recognition of point cloud data. Foundations of Computational Mathematics, 5(3):313–347, 2005.
 [28] M. Meyer, M. Desbrun, P. Schröder, and A. H. Barr. Discrete differential-geometry operators for triangulated 2-manifolds. Visualization and Mathematics III, pages 35–57, 2003.
 [29] N. J. Mitra, N. Gelfand, H. Pottmann, and L. J. Guibas. Registration of point cloud data from a geometric optimization perspective. In Proc. Eurographics Symposium on Geometry Processing, pages 23–32, 2004.
 [30] T. Neumann et al. Compressed manifold modes for mesh processing. Computer Graphics Forum, 33(5):35–44, 2014.
 [31] M. Ovsjanikov, M. BenChen, J. Solomon, A. Butscher, and L. Guibas. Functional maps: A flexible representation of maps between shapes. TOG, 31(4), 2012.
 [32] M. Ovsjanikov, Q. Mérigot, F. Mémoli, and L. Guibas. One point isometric matching with the heat kernel. Computer Graphics Forum, 29(5):1555–1564, 2010.
 [33] V. Ozoliņš, R. Lai, R. Caflisch, and S. Osher. Compressed modes for variational problems in mathematics and physics. PNAS, 110(46):18368–18373, 2013.
 [34] U. Pinkall and K. Polthier. Computing discrete minimal surfaces and their conjugates. Experimental Mathematics, 2:15–36, 1993.
 [35] J. Pokrass et al. Sparse modeling of intrinsic correspondences. Computer Graphics Forum, 32:459–468, 2013.
 [36] E. Rodolà, A. M. Bronstein, A. Albarelli, F. Bergamasco, and A. Torsello. A game-theoretic approach to deformable shape matching. In Proc. CVPR, 2012.
 [37] E. Rodolà, S. R. Bulo, T. Windheuser, M. Vestner, and D. Cremers. Dense non-rigid shape correspondence using random forests. In Proc. CVPR, 2014.
 [38] Y. Sahillioğlu and Y. Yemez. Coarse-to-fine combinatorial matching for dense isometric shape correspondence. Computer Graphics Forum, 30(5):1461–1470, 2011.
 [39] J. Shotton, T. Sharp, A. Kipman, A. Fitzgibbon, M. Finocchio, A. Blake, M. Cook, and R. Moore. Realtime human pose recognition in parts from single depth images. Communications of the ACM, 56(1):116–124, 2013.
 [40] A. Shtern and R. Kimmel. Matching LBO eigenspace of non-rigid shapes via high order statistics. arXiv:1310.4459, 2013.
 [41] P. Skraba, M. Ovsjanikov, F. Chazal, and L. Guibas. Persistence-based segmentation of deformable shapes. In Proc. NORDIA, 2010.
 [42] J. Solomon, A. Nguyen, A. Butscher, M. BenChen, and L. Guibas. Soft maps between surfaces. In Computer Graphics Forum, volume 31, pages 1617–1626, 2012.
 [43] N. Srebro, J. Rennie, and T. S. Jaakkola. Maximummargin matrix factorization. In Proc. NIPS, 2004.
 [44] A. Tevs, A. Berner, M. Wand, I. Ihrke, and H.P. Seidel. Intrinsic shape matching by planned landmark sampling. Computer Graphics Forum, 30(2):543–552, 2011.
 [45] L. Torresani, V. Kolmogorov, and C. Rother. Feature correspondence via graph matching: Models and global optimization. In Proc. ECCV, 2008.
 [46] S. Umeyama. An eigendecomposition approach to weighted graph matching problems. PAMI, 10(5):695–703, 1988.
 [47] B. Vandereycken. Lowrank matrix completion by Riemannian optimization. SIAM J. Optimization, 23(2):1214–1236, 2013.

 [48] U. Von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395–416, 2007.
 [49] C. Wang and S. Mahadevan. Manifold alignment without correspondence. In Proc. IJCAI, 2009.
 [50] F. Wang, Q. Huang, M. Ovsjanikov, and L. Guibas. Unsupervised multiclass joint image segmentation. In Proc. CVPR, 2014.
 [51] A. Zaharescu, E. Boyer, K. Varanasi, and R. Horaud. Surface feature detection and description with applications to mesh matching. In Proc. CVPR, 2009.
 [52] L. Zelnik-Manor and P. Perona. Self-tuning spectral clustering. In Proc. NIPS, 2004.
 [53] Y. Zeng, C. Wang, Y. Wang, X. Gu, D. Samaras, and N. Paragios. Dense non-rigid surface registration using high-order graph matching. In Proc. CVPR, 2010.