Shape Analysis via Functional Map Construction and Bases Pursuit

09/29/2019 · Omri Azencot et al. · Rensselaer Polytechnic Institute

We propose a method to simultaneously compute scalar basis functions with an associated functional map for a given pair of triangle meshes. Unlike previous techniques that put emphasis on smoothness with respect to the Laplace–Beltrami operator and thus favor low-frequency eigenfunctions, we aim for a spectrum that allows for better feature matching. This change of perspective introduces many degrees of freedom into the problem, which we exploit to improve the accuracy of our computed correspondences. To effectively search this high-dimensional space of solutions, we incorporate state-of-the-art regularizers into our minimization. We solve the resulting highly non-linear and non-convex problem using an iterative scheme via the Alternating Direction Method of Multipliers. At each step, our optimization involves simple-to-solve linear or Sylvester-type equations. In practice, our method performs well in terms of convergence, and we additionally show that it is similar to a provably convergent problem. We show the advantages of our approach by extensively testing it on multiple datasets in a few applications, including shape matching, consistent quadrangulation and scalar function transfer.


1. Introduction

Functional maps (FM) (Ovsjanikov et al., 2012) were recently introduced in the geometry processing community in the context of shape matching. During the last few years, FM were quickly adopted by many, serving as the key building block in a range of shape analysis frameworks. Applications include mesh quadrangulation and fluid simulation tasks, in addition to the original shape correspondence problem. The goal of this paper is to propose an efficient and easy-to-implement framework for computing improved functional map matrices.

The key idea behind FM is that instead of aligning points as in Iterative Closest Point (ICP) approaches (Besl and McKay, 1992), it is often simpler to align scalar functions defined on the input shapes. Thus, a typical pipeline for computing functional maps is composed of three steps. Given a pair of shapes, one first collects a set of corresponding descriptors, such as the Wave Kernel Signature (Aubry et al., 2011). Second, one performs dimensionality reduction by projecting the descriptors onto a spanning subspace of basis functions. Finally, one solves an optimization problem, seeking a matrix that best aligns the projected features, possibly while minimizing additional regularizing terms.
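To make the three steps concrete, here is a minimal Python/NumPy sketch of the classical fixed-basis pipeline (not the method proposed in this paper); the inputs phiM, phiN (basis matrices), WM, WN (lumped mass matrices) and descF, descG (corresponding descriptors) are hypothetical placeholders.

```python
import numpy as np

def fit_functional_map(phiM, WM, descF, phiN, WN, descG):
    """Minimal fixed-basis functional map: project descriptors and
    solve min_C ||C A - B||_F^2 in the least-squares sense."""
    # Step 2: dimensionality reduction -- spectral coefficients of the descriptors.
    A = phiM.T @ (WM @ descF)   # k x q coefficients on the source
    B = phiN.T @ (WN @ descG)   # k x q coefficients on the target
    # Step 3: align the projected features; C A ~ B  <=>  A^T C^T ~ B^T.
    C = np.linalg.lstsq(A.T, B.T, rcond=None)[0].T
    return C
```

The rest of the paper replaces the fixed bases in this sketch with bases that are optimized jointly with the map.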


Figure 2. We show above some of the modes associated with the Laplace–Beltrami (LB) operator (left) and the Proper Orthogonal Decomposition (POD) as computed for the Wave Kernel Signature (Aubry et al., 2011) and segment information (Kleiman and Ovsjanikov, 2018) (right). While the LB modes encode the intrinsic geometry of the shape, they struggle with representing high frequency signals. In comparison, the POD modes faithfully represent the descriptors’ spectrum while implicitly capturing the geometry as encoded in the given features.

Numerous extensions to the original pipeline (Ovsjanikov et al., 2012) were proposed in the literature. These extensions can be broadly classified into two research avenues. On the one hand, recent works focus on the formulation of novel regularization terms that can be incorporated into the functional map computation phase. For instance, cycle-consistency is promoted in (Huang et al., 2014), whereas (Nogneng and Ovsjanikov, 2017) favor the preservation of the given descriptors. On the other hand, some papers aim to improve the functional subspaces onto which the features are projected. For example, (Kovnatsky et al., 2013) design basis elements to account for sign or ordering ambiguities. In this context, our work contributes by combining the tasks of functional map computation and basis design into a single unified framework. Our formulation allows us to harness the advancements in functional map regularization, and to benefit from the larger search space of solutions obtained when the bases are allowed to change during the optimization.

Choosing a good basis set is crucial in FM applications. In the original work (Ovsjanikov et al., 2012), the authors propose to utilize the spectrum of the Laplace–Beltrami (LB) operator as the spanning subspace in the second step of the pipeline. More generally, over the last few years, LB bases have become the prevailing choice for function representation in many geometry processing tasks such as computing descriptors (Rustamov, 2007), distances (Solomon et al., 2014), and generating shape segments (Reuter et al., 2009), to name a few. While this choice can be optimal under certain conditions (Aflalo et al., 2016), it may sometimes lead to subpar results. To this end, Kovnatsky and others (Kovnatsky et al., 2013, 2015, 2016; Litany et al., 2017) optimize for jointly diagonalizable (JD) basis elements to improve shape matching tasks. Solving for the bases increases the solution space, making it possible to produce better correspondences.

In this paper, we address two limitations that appear in most existing work on functional maps. The first shortcoming is related to the common choice of the LB spectrum. While LB encodes the geometry of the surface, it is completely independent of the selected features, potentially introducing large representation errors. Indeed, high-frequency signals such as locally supported functions exhibit poor spectral representations (Nogneng et al., 2018). Thus, in contrast to previous work, our approach is based on designing basis elements that are tailored to a given collection of descriptors. In practice, we observe that employing LB-based representations often eliminates many degrees of freedom that could be re-introduced into the problem. Instead of using LB, we utilize the Proper Orthogonal Decomposition (POD) modes for dimensionality reduction purposes. POD subspaces share many of the advantageous properties of LB: they are orthonormal and have a natural ordering. However, POD modes are superior to LB in capturing high-frequency data and thus improve descriptor transfer between shapes.

The second limitation we alleviate concerns the split between the tasks of basis design and functional map computation. Indeed, most existing works focus on only one of these tasks: using a fixed basis, or alternatively using a closed-form solution for the functional map. Instead, we propose to merge these objectives into one larger problem. In practice, this modification enlarges the search space, allowing us to find better solutions for a given problem. In addition, we can independently regularize and constrain the bases and the functional map according to the specific requirements of the application at hand, leading to a flexible yet effective framework.

The main contribution of this work is an effective minimization framework for computing functional basis sets on a pair of shapes together with a corresponding functional map. The resulting optimization is unfortunately highly non-linear and non-convex. Nevertheless, we construct a novel and highly efficient Alternating Direction Method of Multipliers (ADMM) scheme. Specifically, our unique choice of auxiliary variables allows us to naturally incorporate state-of-the-art complex regularizers promoting, e.g., cycle-consistency or metric preservation. Moreover, our scheme converges empirically, and we additionally show that our method is similar to a provably convergent procedure. We evaluate our approach on several shape analysis tasks including shape matching, joint quadrangulation and function transfer. Our comparison to previous work indicates that our method surpasses state-of-the-art results in shape correspondence on challenging scenarios where the shapes do not share the same connectivity or are only approximately isometric.

2. Related Work

Functional maps (Ovsjanikov et al., 2012) have recently gained a lot of attention in geometry processing and related fields. Some of the applications in which functional maps were found useful include shape exploration (Rustamov et al., 2013), fluid simulation (Azencot et al., 2014) and function transfer (Nogneng et al., 2018). We refer the interested reader to a recent course discussing the functional map framework and a few related applications (Ovsjanikov et al., 2016).

One of the main scenarios in which functional maps are employed is the computation of shape correspondences between a given pair or collection of shapes. In this context, many works extend the original approach (Ovsjanikov et al., 2012) to include various regularization terms. For instance, Nogneng and Ovsjanikov (2017) show that minimizing commutativity with descriptor operators leads to better functional maps. In (Huang et al., 2014), the authors promote consistency with respect to the inverse mapping, and recently, (Ren et al., 2018) formulate orientation-preserving terms in the functional maps pipeline. In addition to improving the accuracy of functional map matrices, approaches for extracting point-to-point maps are also actively developed. (Rodolà et al., 2015) cast this problem as a probability density function estimation, whereas (Ezuz and Ben-Chen, 2017) propose to minimize the error of projecting delta functions onto the basis and its orthogonal complement. Finally, (Ren et al., 2018) iteratively alternate between improving the map in its spectral and spatial domains.

In all of the above works, while the functional map could be computed in any scalar basis, the eigenfunctions of the Laplace–Beltrami operator are typically used. This choice is natural given the wealth of theoretical results related to the LB spectrum, but, on the other hand, it is completely independent of the input descriptors. Recently, (Schonsheck et al., 2018) proposed to design one basis via a conformal deformation while the other basis set is kept fixed. Probably closest to our approach is the line of work of (Kovnatsky et al., 2013, 2015, 2016; Litany et al., 2017), where the authors look for spectral coefficients such that the resulting basis elements are as close as possible to the LB eigenfunctions while preserving the given constraints. In contrast to their perspective, we advocate the use of a linear domain in which the descriptors are better represented, in addition to the incorporation of different regularizers. We provide a detailed comparison between our method and theirs in Sec. 7.3.

3. Motivation and Background

To motivate our approach, we will need the following notation. Let $\mathcal{M}$ and $\mathcal{N}$ be a pair of manifold triangle meshes, where $V_\mathcal{M}, V_\mathcal{N}$ are their vertex sets and $T_\mathcal{M}, T_\mathcal{N}$ are their triangle sets. We represent scalar functions using real values on vertices, i.e., $f$ is a scalar function on $\mathcal{M}$, and similarly, $g$ is a function on $\mathcal{N}$. Thus, $f$ and $g$ are real-valued vectors of sizes $|V_\mathcal{M}|$ and $|V_\mathcal{N}|$, respectively. We define the inner product on $\mathcal{M}$ to be $\langle f_1, f_2 \rangle_\mathcal{M} = f_1^\top W_\mathcal{M} f_2$, where $W_\mathcal{M}$ is the diagonal (lumped) mass matrix of the nodes of $\mathcal{M}$ (see e.g., (Botsch et al., 2010, Chap. 3)), and similarly, we have $\langle g_1, g_2 \rangle_\mathcal{N} = g_1^\top W_\mathcal{N} g_2$. The input to our problem is a collection of functional constraints $\{f_j\}_{j=1}^q$ and $\{g_j\}_{j=1}^q$ such that $f_j$ and $g_j$ encode the same information but on different meshes, for every $j$. Finally, we arrange the given constraints in matrices $F = (f_1, \dots, f_q)$ and $G = (g_1, \dots, g_q)$.

In its simplest form, the task of computing functional maps consists of finding a matrix $C$ that aligns the descriptors, i.e.,

$C\,\Phi_\mathcal{M}^\top W_\mathcal{M} F \approx \Phi_\mathcal{N}^\top W_\mathcal{N} G,$

where $\Phi_\mathcal{M}$ and $\Phi_\mathcal{N}$ are bases of scalar functions on $\mathcal{M}$ and $\mathcal{N}$, each with $k$ elements. Typically, $C$ is a moderately sized $k \times k$ matrix with $k \ll |V_\mathcal{M}|, |V_\mathcal{N}|$. If we assume that the $F$ and $G$ matrices are fixed, it is natural to ask whether optimizing for $C$ and for the basis matrices will yield improved feature matching. We show in Fig. 1 an example of a functional map with its bases obtained in this way (left), leading to a high quality map between non-isometric shapes (right). Solving for the bases and the map significantly increases the number of parameters from $k^2$ to $k^2 + (|V_\mathcal{M}| + |V_\mathcal{N}|)\,k$, resulting in a problem that is challenging to solve since $|V_\mathcal{M}|$ and $|V_\mathcal{N}|$ are very large. To deal with this issue, we can consider a subspace of solutions of size $k^2 + (m_\mathcal{M} + m_\mathcal{N})\,k$, where $m_\mathcal{M}$ and $m_\mathcal{N}$ are the spectral dimensions of some fixed bases. That is, instead of finding spatial bases, we look for matrices of spectral coefficients with respect to predefined linear subspaces.

Figure 3. We plot the error distributions of feature matching when LB and POD bases are used. The above results show that designing POD modes is beneficial both in the spectral and spatial domains.

In practice, most existing works utilize the subspace spanned by the LB eigenfunctions. In this work, we propose to change this common choice and take subspaces that better fit the given descriptors. There are several approaches in the machine learning community that could be investigated to achieve this objective. In this work, we advocate the utilization of the Proper Orthogonal Decomposition (POD) modes (Berkooz et al., 1993), which can be classified as a linear manifold learning method. Given a set of descriptors, the POD can be easily computed using the Singular Value Decomposition, see Sec. 6. One of the main reasons for preferring POD modes over other bases is the Karhunen–Loève theorem, stating that these modes best approximate the input data under many choices of norms (Xiu, 2010).
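As a sketch of this computation (with our own variable names, and assuming a mass-weighted inner product, which is an implementation detail not recoverable from this copy of the text):

```python
import numpy as np

def pod_modes(desc, W_diag, k):
    """POD (a.k.a. PCA / Karhunen-Loeve) modes of a descriptor matrix.

    desc   : (n, q) descriptor values on the n vertices
    W_diag : (n,) lumped mass matrix diagonal, defining the inner product
    k      : number of modes to keep
    Returns an (n, k) basis that is orthonormal w.r.t. the mass matrix.
    """
    w = np.sqrt(W_diag)[:, None]
    U, S, _ = np.linalg.svd(w * desc, full_matrices=False)
    # Undo the weighting so that (modes.T * W_diag) @ modes = I_k.
    modes = U[:, :k] / w
    return modes, S
```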

Therefore, incorporating POD modes instead of the LB spectrum may be considered a data-driven approach for representing and manipulating signals. We show in Fig. 2 a few modes related to the LB operator (left) and resulting from the POD computation (right). One significant difference between these bases is that POD modes allow for higher frequencies when compared to a similar truncation of LB. For example, an LB eigenfunction is typically significantly smoother than the POD mode of the same index. Moreover, encoding descriptors in a POD subspace induces less information loss in comparison to LB representations in the context of designing bases and functional maps, as we show below.

To quantify the difference between the LB and POD subspaces, we measure the average matching error distributions per mode and per vertex, i.e., the squared residuals of descriptor transfer, averaged either over the basis elements (spectral domain) or over the vertices (spatial domain). In Fig. 3 we compare these errors when the bases are fixed as well as designed. Our results indicate that the fixed POD subspaces are extremely accurate for matching in the spectral domain, but yield the most error spatially. Moreover, LB bases produce poor results for both error measures when designed, e.g., using joint diagonalization methods (Kovnatsky et al., 2013). Finally, designed POD modes give the most accurate estimation in the spatial domain and are second best in the spectral regime. We remark that for the POD case these errors naturally depend on the particular descriptors in use. In our applications, we employ a mixture of features such as the Wave Kernel Signature (Aubry et al., 2011) or landmarks provided by the user.
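The exact error definitions used for Fig. 3 did not survive in this copy. The sketch below encodes one plausible reading: the squared residual of descriptor transfer averaged per basis element (spectral domain) and, after reconstruction on the target, per vertex (spatial domain).

```python
import numpy as np

def matching_errors(C, phiM, WM, descF, phiN, WN, descG):
    """Assumed error measures: average squared residual per mode / per vertex."""
    A = phiM.T @ (WM @ descF)          # source coefficients, k x q
    B = phiN.T @ (WN @ descG)          # target coefficients, k x q
    spectral = C @ A - B               # residual in the reduced basis
    spatial = phiN @ (C @ A) - descG   # residual after reconstruction on the target
    err_per_mode = np.mean(spectral**2, axis=1)    # one value per basis function
    err_per_vertex = np.mean(spatial**2, axis=1)   # one value per vertex of the target
    return err_per_mode, err_per_vertex
```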

4. Functional Map and Basis Search (FMBS)

Our main goal is to find basis matrices $\Phi_\mathcal{M}$ and $\Phi_\mathcal{N}$ and a functional map $C$ such that these objects best align the constraints $F$ and $G$. To reduce clutter, we scale each descriptor matrix by its corresponding mass matrix and denote $\hat F = W_\mathcal{M} F$ and $\hat G = W_\mathcal{N} G$. Formally, we consider the problem

(1) $\min_{\Phi_\mathcal{M},\,\Phi_\mathcal{N},\,C} \;\big\| C\,\Phi_\mathcal{M}^\top \hat F - \Phi_\mathcal{N}^\top \hat G \big\|_F^2$

subject to $\Phi_\mathcal{M}^\top W_\mathcal{M} \Phi_\mathcal{M} = I_k$ and $\Phi_\mathcal{N}^\top W_\mathcal{N} \Phi_\mathcal{N} = I_k$,

where the terms $\Phi_\mathcal{M}^\top \hat F$ and $\Phi_\mathcal{N}^\top \hat G$ can be viewed as projecting the constraints onto the basis matrices. The equality conditions constrain the bases to be orthogonal with respect to the mass matrices. Unfortunately, the minimization problem (1) is highly non-linear and non-convex, and thus practical solvers are challenging to construct. To alleviate these difficulties, we propose in the next section a splitting scheme that is based on the Alternating Direction Method of Multipliers (ADMM) (Gabay and Mercier, 1975; Glowinski and Marroco, 1975).

5. An ADMM Approach to FMBS

The basic idea behind ADMM is to split the original complex optimization task into simpler subproblems that can be solved efficiently. Under certain conditions on the objective function and constraints, ADMM can be shown to converge. Therefore, ADMM is often the optimization framework of choice, arguably due to its low computational complexity and theoretical guarantees. To allow splitting in our problem (1) above, we introduce the auxiliary variables $P$ and $Q$, one per basis, and arrive at the following optimization

(2)
subject to the relaxed (split) orthogonality constraints,

where the objective is the data fidelity term of (1). We stress that while ADMM can be viewed as a standard optimization technique, the choice of auxiliary variables is highly problem dependent and there is no general rule for how to “correctly” set these variables. In particular, our unique choice leads to an empirically converging scheme for a large range of parameters, and it further allows for a natural incorporation of novel regularizers (11). Finally, we mention that the auxiliary variables linearize the difficult orthogonality constraints, which may lead to non-orthogonal bases in practice; this issue can, however, be resolved in a post-processing step.

To minimize (2) we employ an iterative scheme where, at each step, the unknowns are updated in an alternating fashion; namely, all variables are kept fixed except the one being updated. In our case, the primal variables are updated in the order $\Phi_\mathcal{M}, \Phi_\mathcal{N}, P, Q, C$, followed by the update of the dual variables. We note that each of the subproblems is at most quadratic in its unknown, and thus can be solved efficiently. In what follows, we discuss each of the updates in detail, including its formulation and solution. To shorten the mathematical formulations below, we omit the iteration index, with the understanding that the variables are updated sequentially as shown in Alg. 1. In addition, we denote the scaled (augmented) Lagrangian terms of the two orthogonality constraints by $L_\mathcal{M}$ and $L_\mathcal{N}$, where $\rho > 0$ is a penalty parameter provided by the user that may be updated during the optimization.
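The explicit forms of problem (2) and of the scaled Lagrangian terms were lost from this copy. One plausible reading, consistent with the biaffine-constraint discussion of Sec. 5.5 and used in the sketches below, pairs each basis with an auxiliary copy and relaxes the orthogonality constraints as follows (the notation $P, Q, U_\mathcal{M}, U_\mathcal{N}, \rho$ is ours, not taken verbatim from the paper):

```latex
\min_{\Phi_\mathcal{M},\Phi_\mathcal{N},P,Q,C}
  \tfrac{1}{2}\,\big\| C\,\Phi_\mathcal{M}^{\top}\hat{F} - \Phi_\mathcal{N}^{\top}\hat{G} \big\|_F^2
  \quad\text{s.t.}\quad
  P^{\top} W_\mathcal{M}\,\Phi_\mathcal{M} = I_k,\qquad
  Q^{\top} W_\mathcal{N}\,\Phi_\mathcal{N} = I_k,
```

with scaled Lagrangian terms of the form

```latex
L_\mathcal{M} = \tfrac{\rho}{2}\,\big\| P^{\top} W_\mathcal{M}\,\Phi_\mathcal{M} - I_k + U_\mathcal{M} \big\|_F^2,
\qquad
L_\mathcal{N} = \tfrac{\rho}{2}\,\big\| Q^{\top} W_\mathcal{N}\,\Phi_\mathcal{N} - I_k + U_\mathcal{N} \big\|_F^2,
```

where $U_\mathcal{M}$ and $U_\mathcal{N}$ denote the scaled dual variables.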

5.1. Updating the bases

The basis $\Phi_\mathcal{M}$ is updated first, using the estimates of the other variables from the previous step. Specifically, we have

(3)

Computing the first-order optimality conditions of (3) leads to a Sylvester-type equation of the form

(4)

which can be efficiently solved with numerical algorithms such as (Golub et al., 1979), implemented via, e.g., dlyap in MATLAB. We emphasize that the dimensionality of Eq. (4) introduces a practical challenge, as it involves dense matrices whose sizes scale with the number of vertices. These concerns, along with other implementation aspects, are considered in Section 6.
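Since Eq. (4) itself is not reproduced in this copy, the snippet below only illustrates how a Sylvester-type equation $AX + XB = E$ is solved in practice with an off-the-shelf routine; the coefficient matrices here are random placeholders, not the ones derived in the paper (which uses MATLAB's dlyap for the analogous step).

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
m, k = 60, 20                      # illustrative sizes only
A = rng.standard_normal((m, m))    # placeholder left coefficient
B = rng.standard_normal((k, k))    # placeholder right coefficient
E = rng.standard_normal((m, k))    # placeholder right-hand side

X = solve_sylvester(A, B, E)       # solves A @ X + X @ B = E
assert np.allclose(A @ X + X @ B, E, atol=1e-6)
```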

The update for $\Phi_\mathcal{N}$ is carried out after the update of $\Phi_\mathcal{M}$, but before the other variables. Therefore, we use the estimate of $\Phi_\mathcal{M}$ at the current step, whereas the rest of the variables are taken from the previous step. The minimization takes the following form

(5)

Problem (5) is quadratic in $\Phi_\mathcal{N}$, and its solution can be computed through the following linear system

(6)

5.2. Updating the auxiliary variables

The minimization problems associated with the auxiliary variables $P$ and $Q$ are similar. These optimization problems take the form

(7)

for each of the two variables. The solution is given via the linear system

(8)

5.3. Updating the functional map

Given the basis matrices $\Phi_\mathcal{M}$ and $\Phi_\mathcal{N}$, finding the best functional map $C$ that aligns the constraints in a least-squares sense has a closed-form solution. Namely, we minimize the data term with respect to $C$, and the solution is given by

(9) $C = \big(\Phi_\mathcal{N}^\top \hat G\big)\,\big(\Phi_\mathcal{M}^\top \hat F\big)^{+},$

where $(\cdot)^{+}$ denotes the pseudo-inverse of the matrix.
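A minimal sketch of this closed-form update, in the same assumed notation as before (NumPy's pinv stands in for the pseudo-inverse):

```python
import numpy as np

def update_functional_map(phiM, phiN, F_hat, G_hat):
    """Closed-form C minimizing ||C (phiM^T F_hat) - phiN^T G_hat||_F^2."""
    A = phiM.T @ F_hat
    B = phiN.T @ G_hat
    return B @ np.linalg.pinv(A)   # C = B A^+, the least-squares solution
```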

5.4. Updating the dual variables

The last step of our scheme is trivial: each (scaled) dual variable is incremented by the residual of its associated constraint, i.e.,

(10)

We summarize the above steps in pseudocode in Alg. 1. We note that generating the full-resolution bases is computationally prohibitive, as $\Phi_\mathcal{M}$ and $\Phi_\mathcal{N}$ are large and dense matrices. However, we significantly reduce the computation costs by representing them in a spectral subspace, as we discuss in Sec. 6.

Figure 4. Our fmbs algorithm shows good empirical behavior for a large range of parameters. We compute the normalized energy and the primal/dual residuals per normalized minimization step, across all our FAUST tests. The graphs above show the average and standard deviation of the energy (left), primal residual (middle) and dual residual (right), all of which exhibit decay.

1: Input: constraint matrices $\hat F$, $\hat G$ and a scalar penalty $\rho$
2: Initialize $\Phi_\mathcal{M}, \Phi_\mathcal{N}$ from the SVD of the respective descriptors, $C$ via Eq. (9), and the dual variables to zero
3: for $l = 1, 2, \dots$ do
4:     Solve for $\Phi_\mathcal{M}$ via the Sylvester-type equation, Eq. (4)
5:     Solve for $\Phi_\mathcal{N}$ via the linear system, Eq. (6)
6:     Update $P$ by solving Eq. (8) with the terms of shape $\mathcal{M}$
7:     Update $Q$ by solving Eq. (8) with the terms of shape $\mathcal{N}$
8:     Solve for $C$ via Eq. (9)
9:     Update the dual variables using Eq. (10)
10:    Update $\rho$ following Section 3.4 in (Boyd et al., 2011)
11: end for
Algorithm 1 Functional Map and Basis Search (fmbs)
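To make the loop concrete, the following Python/NumPy sketch re-implements the flow of Alg. 1 for the assumed reduced problem described above. The descriptor matrices A0 and B0 are assumed to already live in the reduced (POD) subspaces of the two shapes, so the mass matrices drop out and the orthogonality constraints become X^T X = I and Y^T Y = I. To keep every subproblem well posed we additionally impose the consensus constraints P = X and Q = Y with their own scaled duals; this device, as well as the concrete update formulas, are our own least-squares derivations for the assumed objective and are not claimed to reproduce the paper's Eqs. (4), (6) and (8).

```python
import numpy as np


def solve_generalized_sylvester(K1, D, K2, R):
    """Solve K1 @ X @ D + K2 @ X = R by Kronecker vectorization.
    Adequate for the small reduced dimensions used here; the paper instead
    solves its Sylvester-type equation with a dedicated solver (dlyap)."""
    m, k = R.shape
    M = np.kron(D.T, K1) + np.kron(np.eye(k), K2)
    x = np.linalg.solve(M, R.flatten(order="F"))
    return x.reshape(m, k, order="F")


def fmbs_admm_sketch(A0, B0, k, rho=1.0, n_iter=200):
    """ADMM for  min_{X,Y,C} 0.5*||C X^T A0 - Y^T B0||_F^2
                 s.t. X^T X = I_k,  Y^T Y = I_k,
    using auxiliary copies P ~ X, Q ~ Y, the biaffine constraints
    P^T X = I, Q^T Y = I, and the consensus constraints P = X, Q = Y."""
    m1, m2 = A0.shape[0], B0.shape[0]
    # Leading coordinate directions; in reduced coordinates this corresponds
    # to the SVD-based initialization described in Sec. 6 of the paper.
    X, P = np.eye(m1, k), np.eye(m1, k)
    Y, Q = np.eye(m2, k), np.eye(m2, k)
    U1, U2 = np.zeros((k, k)), np.zeros((k, k))       # duals of P^T X = I, Q^T Y = I
    V1, V2 = np.zeros((m1, k)), np.zeros((m2, k))     # duals of X = P, Y = Q
    C = (Y.T @ B0) @ np.linalg.pinv(X.T @ A0)
    I = np.eye(k)

    for _ in range(n_iter):
        # X-update: generalized Sylvester equation (role of Eq. (4)).
        K1, D = A0 @ A0.T, C.T @ C
        K2 = rho * (P @ P.T + np.eye(m1))
        R = A0 @ B0.T @ Y @ C + rho * (P @ (I - U1) + P - V1)
        X = solve_generalized_sylvester(K1, D, K2, R)
        # Y-update: plain linear system (role of Eq. (6)).
        lhs = B0 @ B0.T + rho * (Q @ Q.T + np.eye(m2))
        rhs = B0 @ A0.T @ X @ C.T + rho * (Q @ (I - U2) + Q - V2)
        Y = np.linalg.solve(lhs, rhs)
        # Auxiliary updates (role of Eq. (8)).
        P = np.linalg.solve(X @ X.T + np.eye(m1), X @ (I - U1).T + X + V1)
        Q = np.linalg.solve(Y @ Y.T + np.eye(m2), Y @ (I - U2).T + Y + V2)
        # Functional map update (role of Eq. (9)).
        C = (Y.T @ B0) @ np.linalg.pinv(X.T @ A0)
        # Dual ascent (role of Eq. (10)).
        U1 += P.T @ X - I
        U2 += Q.T @ Y - I
        V1 += X - P
        V2 += Y - Q
    return X, Y, C
```

In this form, each iteration costs only a handful of small dense solves, mirroring the observation that every subproblem is trivial, linear, or Sylvester-type.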

5.5. Provably convergent FMBS

Most convergence results related to ADMM handle problems with convex objective functions and linear constraints. Recently, (Wang et al., 2018) and (Gao et al., 2018) extended the convergence analysis of ADMM to a significantly larger class of problems including non-convex objective terms and non-linear constraints. In particular, in the latter work, the authors investigate the case where biaffine constraints are given, namely, constraints involving two variables which become linear when one variable is kept fixed. For instance, our orthogonality conditions are exactly of this form. Moreover, (Gao et al., 2018) relax the convexity requirements on the objective function and allow differentiable terms to be included instead.

In practice, Alg. 1 behaves well and exhibits energy decrease for many choices of parameters, as we show in Fig. 4 and Sec. 7; however, it does not satisfy the conditions given in (Gao et al., 2018). To show convergence, we consider in App. A a different minimization (14) for which we can show the following result.

Proposition 1.

Under some mild conditions, problem (14) satisfies all the requirements in (Gao et al., 2018) and thus its ADMM converges to a constrained stationary point. That is, the sequence of variable updates is bounded and every limit point is a constrained stationary point.

Figure 5. We compute the geodesic error between the ground-truth mapped vertices and those mapped with the above methods, and we accumulate the number of points that fall below a given error (Kim et al., 2011), visualized in the above graphs. Noticeably, our method yields the best average error among the compared methods for both ICP and BCICP map extraction. We note that basis design methods such as ours and AJD (Kovnatsky et al., 2013) gain the most from advanced mapping methods such as BCICP (Ren et al., 2018).

5.6. Regularized FMBS

One of the key aspects of our minimization (2) is that it introduces many degrees of freedom via the unknown bases and map. While in general this is a positive feature of our approach, the associated optimization requires a significant number of descriptors. To relax this dependency, we propose to incorporate regularization terms into our problem. In particular, we add a consistency regularizer that takes into account the inverse functional map. Moreover, we add an isometry-promoting term which is given by commutativity with the LB operator (Ovsjanikov et al., 2012), and Dirichlet energies that favor smooth basis elements. We note that other regularizers such as descriptor commutativity (Nogneng and Ovsjanikov, 2017) or orientation preservation (Ren et al., 2018) may also be considered. Formally, we propose the following objective function

(11)

where the coefficients are penalty scalars, $\operatorname{tr}(\cdot)$ yields the trace of a matrix, and $L_\mathcal{S}$ is the cotangent weights matrix (Pinkall and Polthier, 1993) of shape $\mathcal{S}$, for $\mathcal{S} \in \{\mathcal{M}, \mathcal{N}\}$. One of the key aspects of our framework, resulting from our ADMM formulation (2), is that it allows us to combine challenging regularizers (11) in a straightforward way. The formulation of the regularized minimization and its associated ADMM is somewhat technical, and we defer the derivation to the supplementary material.
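The explicit regularized objective (11) did not survive extraction. The terms described above are standard in the functional map literature; a plausible form, written with our assumed notation (weights $\alpha, \beta, \gamma$, an inverse-direction map $C_{\mathcal{N}\mathcal{M}}$, reduced LB operators $\hat L_\mathcal{S} = \Phi_\mathcal{S}^\top L_\mathcal{S} \Phi_\mathcal{S}$, and cotangent matrices $L_\mathcal{S}$), is:

```latex
E_{\mathrm{reg}}
= \frac{\alpha}{2}\Big(\big\|C_{\mathcal{N}\mathcal{M}}\,C - I\big\|_F^2 + \big\|C\,C_{\mathcal{N}\mathcal{M}} - I\big\|_F^2\Big)
+ \frac{\beta}{2}\,\big\|C\,\hat L_\mathcal{M} - \hat L_\mathcal{N}\,C\big\|_F^2
+ \frac{\gamma}{2}\Big(\operatorname{tr}\big(\Phi_\mathcal{M}^\top L_\mathcal{M}\Phi_\mathcal{M}\big)
                     + \operatorname{tr}\big(\Phi_\mathcal{N}^\top L_\mathcal{N}\Phi_\mathcal{N}\big)\Big).
```

The first group promotes consistency with the inverse map, the second promotes isometry via commutativity with the LB operator, and the traces are the Dirichlet energies of the designed bases.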

(Panel labels: GT, FMAPS, AJD, CFM, DPC, Ours)

Figure 6. We visualize the quality of the computed correspondences using texture transfer. We note that the alternative methods struggle with elongated or small areas such as the head and legs, whereas our method achieves improved results, as can be seen by comparing to the ground-truth (GT).

6. Implementation Details

In what follows we describe a few technical aspects related to our method including dimensionality reduction of problem (2), variable initialization, stopping condition and the development platform.

Dimensionality reduction

Solving (2) directly is computationally prohibitive when the shapes consist of many vertices. To overcome this difficulty, we propose to reduce the spatial dimension and work in a spectral domain instead, allowing for fast computation times while retaining a significant number of degrees of freedom. Specifically, we take the left singular vectors obtained by computing the Singular Value Decomposition (SVD) of the given constraints, and keep only the most significant modes. In our experiments we choose the number of modes such that they cover at least a fraction $r$ of the spectrum. We note that other spectral bases could be considered, e.g., the LB basis itself (Kovnatsky et al., 2013). However, each choice leads to a different optimization having its own set of assumptions and challenges. In Sec. 7, we compare our approach to other methods.

To incorporate the reduced subspaces into our optimization, we denote the reduced quantities in boldface and perform the following modifications,

(12)

yielding matrices whose sizes are governed by the spectral dimensions rather than by the vertex counts. Substituting the above reduced components for their high-dimensional counterparts is the only change needed to obtain a spectral version of Alg. 1. Finally, given the reduced bases, we reconstruct the full bases by mapping the spectral coefficients back through the subspace matrices. We note that while our approach strongly depends on the input features for deriving the low-dimensional subspaces, in our experiments we observed that it works quite well with a variety of descriptors such as WKS and segmentation information.
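A sketch of this reduction step (variable names are ours; whether the SVD is taken of the raw or the mass-weighted descriptors is not recoverable from this copy, so the weighting below is an assumption):

```python
import numpy as np

def reduced_subspace(desc, W_diag, r=0.9):
    """Left singular vectors of the (mass-weighted) descriptor matrix, truncated
    so that the retained singular values cover a fraction r of the spectrum."""
    w = np.sqrt(W_diag)[:, None]
    U, S, _ = np.linalg.svd(w * desc, full_matrices=False)
    energy = np.cumsum(S) / np.sum(S)
    m = int(np.searchsorted(energy, r)) + 1   # smallest m covering r of the spectrum
    return U[:, :m] / w, m                    # W-orthonormal reduced basis

# The reduced descriptors fed to the optimization would then be, e.g.,
# A0 = S_M.T @ (W_diag_M[:, None] * descF) for the source shape, and similarly B0.
```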

Variable initialization and stopping rule

In our tests we noticed that our method is robust to the choice of initial values. Nevertheless, we describe the particular values we used in our experiments. To initialize the primal bases, we take the first singular vectors of the respective descriptors; this is the SVD-based initialization in Alg. 1. Using these bases, we can solve Eq. (9) to obtain an initial functional map. The dual variables are set to zero matrices of the appropriate sizes. The stopping condition we use is based on the primal and dual residuals and is detailed in (Boyd et al., 2011, Sec. 3.3); in addition, we cap the maximum number of steps.
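For completeness, the standard residual-based stopping test and adaptive penalty update of (Boyd et al., 2011, Secs. 3.3–3.4), written here for the consensus block X = P of the sketch after Alg. 1 (the variable names are ours):

```python
import numpy as np

def admm_stopping_and_penalty(X, P, P_prev, U, rho,
                              eps_abs=1e-6, eps_rel=1e-4, mu=10.0, tau=2.0):
    """Residual-based stopping test (Boyd et al., Sec. 3.3) and adaptive penalty
    update (Sec. 3.4) for the consensus constraint X = P; U is the corresponding
    scaled dual variable."""
    r = X - P                          # primal residual of the block
    s = rho * (P - P_prev)             # dual residual of the block
    n = np.sqrt(X.size)
    eps_pri = n * eps_abs + eps_rel * max(np.linalg.norm(X), np.linalg.norm(P))
    eps_dual = n * eps_abs + eps_rel * np.linalg.norm(rho * U)
    converged = np.linalg.norm(r) <= eps_pri and np.linalg.norm(s) <= eps_dual
    # Keep the residuals balanced by adapting rho (rescale the scaled dual too).
    if np.linalg.norm(r) > mu * np.linalg.norm(s):
        rho, U = tau * rho, U / tau
    elif np.linalg.norm(s) > mu * np.linalg.norm(r):
        rho, U = rho / tau, U * tau
    return converged, rho, U
```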

Dataset r
FAUST intra 0.9
FAUST inter 0.9
SCAPE 0.9
Remeshed FAUST intra 0.99
Remeshed FAUST inter 0.99
Remeshed SCAPE 0.99
Table 1. The parameter values used in our tests for each dataset.

Development platform and parameters

We implemented our method in MATLAB, using its built-in numerical tools such as dlyap and mldivide. Our approach was tested on an Intel Core i7 2.6GHz processor with 16GB RAM. We show in Fig. 7 a runtime comparison to AJD (Kovnatsky et al., 2013) and OPC (Ren et al., 2018) on meshes of various vertex counts. The parameters of our method include the penalty scalars for the different energy terms (11). We list our choices in Tab. 1, which also shows how much of the spectrum we employ, given by the parameter $r$. Finally, the size of the functional map and the associated bases was fixed across experiments unless noted otherwise.

Figure 7. We compare the total pre-processing and computation times of the above methods on a pair of shapes for a large range of vertex counts. Our method is significantly faster than AJD and OPC for high vertex counts, while for low vertex counts OPC is more efficient than our approach.

7. Evaluation and Results

To evaluate our method, we consider several applications in which functional maps are useful, such as extraction of point-to-point maps (Ovsjanikov et al., 2012), function transfer (Nogneng et al., 2018), and consistent quadrangulation (Azencot et al., 2017). We test our approach on a variety of datasets including SCAPE (Anguelov et al., 2005), TOSCA (Bronstein et al., 2008), FAUST (Bogo et al., 2014), and SHREC07 (Giorgi et al., 2007). In our comparison, we consider other methods for computing functional maps, namely functional maps (FMAPS) (Ovsjanikov et al., 2012), approximate joint diagonalization (AJD) (Kovnatsky et al., 2013), coupled functional maps (CFM) (Eynard et al., 2016), descriptor preservation via commutativity (DPC) (Nogneng and Ovsjanikov, 2017) and orientation preserving correspondences (OPC) (Ren et al., 2018). We only use the functional map matrices as computed by the above techniques, and we discard any other improvements related to a specific application. In all cases, we used the authors' recommended parameters or searched for the best ones.

7.1. Extracting point-to-point maps

One of the main applications of functional maps is the computation of point-to-point correspondences between pairs of shapes. In our comparison, we consider two different scenarios. The first includes the original FAUST and SCAPE shapes using landmarks and Wave Kernel Signature (WKS) (Aubry et al., 2011) features. In the second case, we remesh the shapes and use consistent segmentation data (Kleiman and Ovsjanikov, 2018) with WKS descriptors. We emphasize that the latter scenario is extremely challenging as it is completely automatic, it involves approximate features, and the meshes have different connectivities. The pairs we use appeared previously in (Kim et al., 2011; Chen and Koltun, 2015). For map extraction we employ the ICP method proposed in (Ovsjanikov et al., 2012) and the recent BCICP approach (Ren et al., 2018), although other methods (Rodolà et al., 2015; Ezuz and Ben-Chen, 2017) could be used. Our evaluation metrics include the computation of cumulative geodesic errors (Kim et al., 2011) and visualization of transferred scalar functions or textures.

In Fig. 5 we show the average cumulative geodesic errors for the first scenario. We note that our approach achieves a significant improvement over all the other competing methods. In particular, when ICP extraction is used, our method yields very good results on FAUST intra, which involves pairs of different poses of the same people. Interestingly, our method benefits the most from recent advances in map extraction techniques (Ren et al., 2018), as can be seen in the second row. Specifically, using BCICP increases the gap between our results and the others' on FAUST inter (different people, different poses) and SCAPE. This hints that our functional map and associated bases introduce more degrees of freedom which can be exploited by elaborate methods such as (Ren et al., 2018). This behavior can additionally be seen in the AJD (Kovnatsky et al., 2013) BCICP results, which surpass most methods even though their ICP measures were generally lower than others'. We do not compare to OPC (Ren et al., 2018) in this setup since we use non-symmetric landmarks, and thus there is no advantage in using their orientation preserving regularization over, e.g., DPC (Nogneng and Ovsjanikov, 2017).


Figure 8. To compare the advantages of regularization, we solve our problem with various combinations of energy terms. Using the obtained maps, we transport the coordinate functions from the target to the source and show the results above. We achieve a significant improvement when we regularize as can be seen around the chest and head (middle-right and right) vs. the non-regularized result (middle-left).

We report the error measures of the second scenario in Fig. 16. We stress that this setup is particularly challenging as the shapes do not share the same connectivity and we use automatically computed features. Nevertheless, our method exhibits the best results on FAUST, both for the isometric and non-isometric cases, when ICP map extraction is applied. Moreover, when we utilize BCICP on FAUST, our method and OPC yield the best scores compared to the alternative methods. Finally, the remeshed SCAPE was an extremely difficult test case, leading to mappings of poor quality in general for most methods (notice the different axis scaling in the corresponding plot). For this dataset, CFM and DPC produced good measures with ICP, while our method and OPC scored highest with BCICP refinement.

The point-to-point correspondence allows us to map information from the target to the source. In Fig. 6, we compare the mappings generated on a single pair of FAUST intra using texture transfer. The meshes in this dataset are in correspondence and thus we can use the ground-truth (GT) map for comparison. Overall, the performance of the tested methods was generally good. However, small parts of the body such as hands and legs were less accurate for FMAPS, AJD and DPC. Moreover, other methods exhibit large errors in the head, whereas ours correctly finds the symmetry line (see the zoom below).

7.2. Effect of regularization

To evaluate the benefits of utilizing regularizing terms, we visualize the map quality via coordinate function transfer in Fig. 8. Indeed, there is a clear improvement when regularization is introduced (middle right) vs. using the data term alone (middle left), as can be seen on the chest and head. Adding the remaining terms (right) is not beneficial in this case, as the result is visually indistinguishable from the (middle right) case.

In addition to this visualization, we also run our algorithm on FAUST and SCAPE in Scenario 1, using different regularization configurations. We show in Fig. 9 the cumulative geodesic errors of these tests. For each dataset, the solid line represents using only the data term, the dashed and dotted lines incorporate additional regularizers (the dotted one adds the Dirichlet energies), and the dash-dot line is obtained by minimizing the full objective including the consistency term. On average, we observe a significant gain when regularization is used (see also the zoomed plots in Fig. 9). In particular, the consistency term (dash-dot line) helps both with respect to the accuracy of the results and the empirical convergence of the problem. We note that the Dirichlet penalization (dotted line) improves the results on FAUST, whereas for SCAPE its contribution is less apparent.

Figure 9. We plot the cumulative geodesic error for maps computed using various regularization settings. Our results indicate that the regularized problems yield better correspondences. See the text for additional details.


Figure 10. We show above the first three basis functions designed using AJD (top row) and our approach (bottom row). Different from AJD, our energy does not favor matching basis elements which allows for a more flexible design process. See also Fig. 1.

7.3. Comparison with AJD (Kovnatsky et al., 2013)

Perhaps closest to our approach is the method that finds approximate joint diagonalized bases of Kovnatsky et al. (2013). In this work, the authors explore an optimization problem which is conceptually similar to ours. However, there are several key differences between our technique and theirs as we detail below.

In terms of the energy functional, our technique is fundamentally different from theirs. Their approach favors basis elements that diagonalize the LB operator, leading to smooth functions. The disadvantage of this point of view is that one implicitly assumes that smooth basis functions span the descriptors' subspace. Unfortunately, many practical descriptors that are currently used in functional map pipelines do not fit this assumption. Indeed, any high-frequency signal such as segment information will undergo low-pass filtering, which may lead to information loss in practice, as we show in Fig. 3. In contrast, our method does not favor smooth basis elements and may output high-frequency functions, see e.g., Fig. 2. Finally, even when we include Dirichlet terms in our minimization, they are weakly weighted.

Another significant difference is in the data fidelity term. The formulation in (Kovnatsky et al., 2013) and others (Litany et al., 2017) fixes the associated functional map to attain a particular structure. Namely, they include a term that takes the following form

(13) $\big\| \Phi_\mathcal{M}^\top \hat F - \Phi_\mathcal{N}^\top \hat G \big\|_F^2,$

which can be interpreted as setting $C$ to be the identity matrix. There are two disadvantages to formulation (13) which our approach overcomes. First, regularizing the functional map is not as straightforward as in our formulation (11), and may lead to quartic expressions in the unknown bases. Indeed, our formulation allows us to independently constrain the bases or the functional map and its inverse, offering greater flexibility alongside the natural utilization of state-of-the-art regularizers. Second, our method allows for general functional map matrices and thus it increases the search space of solutions when compared with AJD frameworks.

To summarize, our approach generalizes AJD methods in that it combines work on joint diagonalization and on functional map optimization in a unified framework. There are three key differences in our technique. First, we consider a much larger search space of solutions, as we utilize the Proper Orthogonal Decomposition (POD) modes, which are better suited to the given features, and we further allow for general functional map matrices. Second, since we jointly optimize for the functional map and the bases, we can naturally incorporate regularization terms. Finally, on the algorithmic side, AJD approaches employ a constrained minimization solver which is inefficient in practice, as can be seen in Fig. 7, and whose convergence is not guaranteed. In contrast, we analyze our approach and show that it is similar to a provably convergent problem.

(Panel labels: GT, AJD, Ours)

Figure 11. We compute functional maps and bases using AJD and our method, and we compare the results to the ground-truth (GT) via texture transfer.
Figure 12. Switching to POD-based design with AJD (blue dashed line) yields an improvement over the LB subspaces (blue line). Still, our framework generates correspondences that are more accurate.

In addition to this qualitative comparison, we show in Figs. 10 and 1 the differences between the designed basis elements. Indeed, AJD (top row) produces highly consistent basis functions compared to ours (bottom row). However, we believe that this behavior limits the design process significantly, which may lead to less accurate matching results, as can be seen in Figs. 5 and 16. Specifically, we select a pair of shapes from TOSCA and visualize the correspondence differences via texture transfer in Fig. 11. Overall, AJD produces reasonable results as compared to the ground-truth (GT). However, various parts of the shape, such as the head, legs and tail, display large errors. In contrast, our technique was able to accurately match most areas of the shapes, including the challenging parts.

To conclude the comparison, we modify AJD to use POD modes in its design process instead of the LB eigenfunctions, and we plot the cumulative geodesic error obtained on FAUST in Fig. 12. Indeed, switching to POD modes (blue dashed line) yields a large improvement compared to LB-based AJD (blue line). However, our method (red line) is still significantly more accurate, which can be attributed in part to the state-of-the-art regularization terms we include in our optimization.

Figure 13. Our technique is particularly suited to methods whose input is a functional map with its associated bases such as (Azencot et al., 2017). We demonstrate the consistent quadrangulations obtained by using their method with input generated by our approach.

7.4. Consistent quadrangulation and function transfer

The increasing interest in the functional map approach over the last few years has led to the development of techniques which can utilize a given functional map directly, without the need to convert it to a point-to-point map. For instance, (Azencot et al., 2017) proposed an optimization framework for designing consistent cross fields on a pair of shapes for the purpose of generating approximately consistent quadrangular remeshings of the input shapes. Our computed functional map and bases can be directly used within their method to produce quad meshes. In Fig. 13, we show examples of the remeshing results for two pairs of shapes having different connectivities (left and right) and genus (right). Still, we obtain highly consistent results, as can be seen in the matching singularity points (red spheres). We provide an additional instance of this pipeline in Fig. 14, comparing the quadrangulation achieved with fixed LB bases (left) vs. our technique with designed POD modes (right). Indeed, we observe a much better alignment of isolines and singularity points with our approach compared to fixed LB.

The last application we consider involves the transfer of scalar-valued information between shapes. Recently, (Nogneng et al., 2018) showed that by extending the usual functional basis to include basis products, an improved function transfer can be performed. In Fig. 15, we utilize this pipeline with our functional map and bases to transfer extremely challenging data given by a localized Gaussian function. Indeed, the transfer is improved using the extended basis, as the noise is less severe and the maximum is more localized.

8. Limitations

One limitation of our framework is related to the dependencies between the given constraints and our choice of dimensionality-reducing subspaces. Indeed, one can always add the standard LB spectrum to these subspaces. However, we observe that in general, the results may change depending on the particular subspace in use and its size. For instance, while increasing the subspace size allows for greater flexibility in representing scalar functions, it also requires more regularization; otherwise, unwanted solutions may potentially become local minimizers. Another shortcoming of our approach is that it tends to produce maps that are less smooth compared to those generated with LB bases. This behavior is somewhat expected, as our bases are designed to potentially transfer high-frequency information, which in turn leads to less uniform correspondences. We leave further investigation of these aspects to future work.


Figure 14. We compare the quadrangulations produced by using fixed LB bases (left) and designed POD modes (right). Overall, there is a significant improvement in the designed case in terms of isolines alignment, singularity matches and adherence to curvature (see e.g., the index finger, left).

(Panel labels: Standard transfer, Product transfer)

Figure 15. Mapping a Gaussian function between shapes with different connectivities is a challenging task, whose results may be exploited in the context of shape matching to construct an accurate correspondence or to improve a given one.
Figure 16. We evaluate our approach in the challenging scenario of shapes with different connectivities and automatically generated descriptors. Our method exhibits very good error measures as compared to state-of-the-art approaches for computing functional maps.

9. Conclusions and Future Work

In this paper, we proposed a method for designing basis elements on a pair of triangle meshes along with an associated functional map. Unlike most existing works, which utilize the spectrum of the Laplace–Beltrami operator, our technique adopts the Proper Orthogonal Decomposition (POD) modes to reduce the dimensionality of the problem. This choice introduces many degrees of freedom and significantly extends the space of potential solutions. To effectively solve the problem, we incorporate state-of-the-art regularization terms which promote consistency, isometry and smoothness. Our optimization scheme is based on the Alternating Direction Method of Multipliers (ADMM) and consists of easy-to-solve linear or Sylvester-type equations. We show that in practice our method behaves well in terms of convergence, and we additionally prove that a problem similar to ours is guaranteed to converge. We evaluate our machinery in the context of shape matching, function transfer and consistent quadrangulation, and we demonstrate that our results yield a significant improvement over state-of-the-art approaches for computing functional maps.

In the future, we would like to characterize the dependencies between the subspaces spanned by the bases, the given constraints, and the resulting functional map. Moreover, we believe that many applications may benefit from the proposed pipeline on a single shape, namely, generating a self functional map along with a set of basis elements defined on the shape. Examples include symmetry detection, fluid simulation and data interpolation, among many other possibilities. On the other hand, extending our framework to handle multiple shapes is an interesting direction as well. Finally, we believe that many of the questions that we consider in our work could benefit from the recent advancements in machine learning with deep neural networks. We plan to investigate how the task of designing a basis and a functional map can be solved using deep learning approaches.

References

  • Y. Aflalo, H. Brezis, A. Bruckstein, R. Kimmel, and N. Sochen (2016) Best bases for signal spaces. Comptes Rendus Mathematique 354 (12), pp. 1155–1167. Cited by: §1.
  • D. Anguelov, P. Srinivasan, H. Pang, D. Koller, S. Thrun, and J. Davis (2005) The correlated correspondence algorithm for unsupervised registration of nonrigid surfaces. In Advances in neural information processing systems, pp. 33–40. Cited by: §7.
  • M. Aubry, U. Schlickewei, and D. Cremers (2011) The wave kernel signature: a quantum mechanical approach to shape analysis. In Computer Vision Workshops (ICCV Workshops), 2011 IEEE International Conference on, pp. 1626–1633. Cited by: Figure 2, §1, §3, §7.1.
  • O. Azencot, E. Corman, M. Ben-Chen, and M. Ovsjanikov (2017) Consistent functional cross field design for mesh quadrangulation. ACM Transactions on Graphics (TOG) 36 (4), pp. 92. Cited by: Figure 13, §7.4, §7.
  • O. Azencot, S. Weißmann, M. Ovsjanikov, M. Wardetzky, and M. Ben-Chen (2014) Functional fluids on surfaces. In Computer Graphics Forum, Vol. 33, pp. 237–246. Cited by: §2.
  • G. Berkooz, P. Holmes, and J. L. Lumley (1993) The proper orthogonal decomposition in the analysis of turbulent flows. Annual review of fluid mechanics 25 (1), pp. 539–575. Cited by: §3.
  • P. J. Besl and N. D. McKay (1992) Method for registration of 3-d shapes. In Sensor Fusion IV: Control Paradigms and Data Structures, Vol. 1611, pp. 586–607. Cited by: §1.
  • F. Bogo, J. Romero, M. Loper, and M. J. Black (2014) FAUST: dataset and evaluation for 3D mesh registration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3794–3801. Cited by: §7.
  • M. Botsch, L. Kobbelt, M. Pauly, P. Alliez, and B. Lévy (2010) Polygon mesh processing. AK Peters/CRC Press. Cited by: §3.
  • S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, et al. (2011) Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends® in Machine learning 3 (1), pp. 1–122. Cited by: §6, 17.
  • A. M. Bronstein, M. M. Bronstein, and R. Kimmel (2008) Numerical geometry of non-rigid shapes. Springer Science & Business Media. Cited by: §7.
  • Q. Chen and V. Koltun (2015) Robust nonrigid registration by convex optimization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2039–2047. Cited by: §7.1.
  • D. Eynard, E. Rodola, K. Glashoff, and M. M. Bronstein (2016) Coupled functional maps. In 3D Vision (3DV), 2016 Fourth International Conference on, pp. 399–407. Cited by: §7.
  • D. Ezuz and M. Ben-Chen (2017) Deblurring and denoising of maps between shapes. In Computer Graphics Forum, Vol. 36, pp. 165–174. Cited by: §2, §7.1.
  • D. Gabay and B. Mercier (1975) A dual algorithm for the solution of non linear variational problems via finite element approximation. Institut de recherche d’informatique et d’automatique. Cited by: §4.
  • W. Gao, D. Goldfarb, and F. E. Curtis (2018) ADMM for multiaffine constrained optimization. arXiv preprint arXiv:1802.09592. Cited by: Appendix A, §5.5, §5.5, Proposition 1, Proposition 1.
  • D. Giorgi, S. Biasotti, and L. Paraboschi (2007) Shape retrieval contest 2007: watertight models track. SHREC competition 8 (7). Cited by: §7.
  • R. Glowinski and A. Marroco (1975) Sur l’approximation, par éléments finis d’ordre un, et la résolution, par pénalisation-dualité d’une classe de problèmes de dirichlet non linéaires. Revue française d’automatique, informatique, recherche opérationnelle. Analyse numérique 9 (R2), pp. 41–76. Cited by: §4.
  • G. Golub, S. Nash, and C. Van Loan (1979) A Hessenberg–Schur method for the problem AX + XB = C. IEEE Transactions on Automatic Control 24 (6), pp. 909–913. Cited by: §5.1.
  • Q. Huang, F. Wang, and L. Guibas (2014) Functional map networks for analyzing and exploring large shape collections. ACM Transactions on Graphics (TOG) 33 (4), pp. 36. Cited by: §1, §2.
  • V. G. Kim, Y. Lipman, and T. Funkhouser (2011) Blended intrinsic maps. In ACM Transactions on Graphics (TOG), Vol. 30, pp. 79. Cited by: Figure 5, §7.1.
  • Y. Kleiman and M. Ovsjanikov (2018) Robust structure-based shape correspondence. In Computer Graphics Forum, Cited by: Figure 2, §7.1.
  • A. Kovnatsky, M. M. Bronstein, X. Bresson, and P. Vandergheynst (2015) Functional correspondence by matrix completion. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 905–914. Cited by: §1, §2.
  • A. Kovnatsky, M. M. Bronstein, A. M. Bronstein, K. Glashoff, and R. Kimmel (2013) Coupled quasi-harmonic bases. In Computer Graphics Forum, Vol. 32, pp. 439–448. Cited by: §1, §1, §2, §3, Figure 5, §6, §6, §7.1, §7.3, §7.3, §7.3, §7.
  • A. Kovnatsky, K. Glashoff, and M. M. Bronstein (2016) MADMM: a generic algorithm for non-smooth optimization on manifolds. In European Conference on Computer Vision, pp. 680–696. Cited by: §1, §2.
  • O. Litany, E. Rodolà, A. M. Bronstein, and M. M. Bronstein (2017) Fully spectral partial shape matching. In Computer Graphics Forum, Vol. 36, pp. 247–258. Cited by: §1, §2, §7.3.
  • D. Nogneng, S. Melzi, E. Rodolà, U. Castellani, M. Bronstein, and M. Ovsjanikov (2018) Improved functional mappings via product preservation. In Computer Graphics Forum, Vol. 37, pp. 179–190. Cited by: §1, §2, §7.4, §7.
  • D. Nogneng and M. Ovsjanikov (2017) Informative descriptor preservation via commutativity for shape matching. In Computer Graphics Forum, Vol. 36, pp. 259–267. Cited by: §1, §2, §5.6, §7.1, §7.
  • M. Ovsjanikov, M. Ben-Chen, J. Solomon, A. Butscher, and L. Guibas (2012) Functional maps: a flexible representation of maps between shapes. ACM Transactions on Graphics (TOG) 31 (4), pp. 30. Cited by: §1, §1, §1, §2, §2, §5.6, §7.1, §7.
  • M. Ovsjanikov, E. Corman, M. Bronstein, E. Rodolà, M. Ben-Chen, L. Guibas, F. Chazal, and A. Bronstein (2016) Computing and processing correspondences with functional maps. In SIGGRAPH ASIA 2016 Courses, pp. 9. Cited by: §2.
  • U. Pinkall and K. Polthier (1993) Computing discrete minimal surfaces and their conjugates. Experimental mathematics 2 (1), pp. 15–36. Cited by: §5.6.
  • J. Ren, A. Poulenard, P. Wonka, and M. Ovsjanikov (2018) Continuous and orientation-preserving correspondences via functional maps. arXiv preprint arXiv:1806.04455. Cited by: §2, Figure 5, §5.6, §6, §7.1, §7.1, §7.
  • M. Reuter, S. Biasotti, D. Giorgi, G. Patanè, and M. Spagnuolo (2009) Discrete laplace–beltrami operators for shape analysis and segmentation. Computers & Graphics 33 (3), pp. 381–390. Cited by: §1.
  • E. Rodolà, M. Moeller, and D. Cremers (2015) Point-wise map recovery and refinement from functional correspondence. arXiv preprint arXiv:1506.05603. Cited by: §2, §7.1.
  • R. M. Rustamov, M. Ovsjanikov, O. Azencot, M. Ben-Chen, F. Chazal, and L. Guibas (2013) Map-based exploration of intrinsic shape differences and variability. ACM Transactions on Graphics (TOG) 32 (4), pp. 72. Cited by: §2.
  • R. M. Rustamov (2007) Laplace-beltrami eigenfunctions for deformation invariant shape representation. In Proceedings of the fifth Eurographics symposium on Geometry processing, pp. 225–233. Cited by: §1.
  • S. C. Schonsheck, M. M. Bronstein, and R. Lai (2018) Nonisometric surface registration via conformal laplace-beltrami basis pursuit. arXiv preprint arXiv:1809.07399. Cited by: §2.
  • J. Solomon, R. Rustamov, L. Guibas, and A. Butscher (2014) Earth mover’s distances on discrete surfaces. ACM Transactions on Graphics (TOG) 33 (4), pp. 67. Cited by: §1.
  • Y. Wang, W. Yin, and J. Zeng (2018) Global convergence of ADMM in nonconvex nonsmooth optimization. Journal of Scientific Computing. https://doi.org/10.1007/s10915-018-0757-z. Cited by: §5.5.
  • D. Xiu (2010) Numerical methods for stochastic computations: a spectral method approach. Princeton university press. Cited by: §3.

Appendix A Proof of Proposition 1

We consider a modified version of our problem (2) given by

(14)
subject to

where the unknowns are grouped into two blocks of variables, the objective function collects the (regularized) data fidelity terms, and the constraints are formed from the biaffine orthogonality conditions.

Proposition 1.

Under some mild conditions, problem (14) satisfies all the requirements in (Gao et al., 2018) and thus its ADMM converges.

Proof

We need to show that the requirements in Assumption 1 and Assumption 2 of (Gao et al., 2018) hold. Our variables are updated sequentially, block by block, with the second block updated as a single unit. The required image condition holds since the image of the constraint map is spanned by the identity matrix in each of its components. The objective function is coercive on the feasible set since it grows quadratically in every variable of the first block; this also holds for the variables of the second block because of the constraints. Moreover, the corresponding coupling function is strongly convex because its Hessian is positive definite. Also, every subproblem in the ADMM of (14) is a trivial, linear or Sylvester-type equation, and thus it attains its optimal value when the penalty parameter is sufficiently large. Finally, our objective term is differentiable and, in particular, it is lower semi-continuous.