Mesh Draping: Parametrization-Free Neural Mesh Transfer

Amir Hertz, et al.
ETH Zurich / Tel Aviv University

Despite recent advances in geometric modeling, 3D mesh modeling still involves a considerable amount of manual labor by experts. In this paper, we introduce Mesh Draping: a neural method for transferring existing mesh structure from one shape to another. The method drapes the source mesh over the target geometry and at the same time seeks to preserve the carefully designed characteristics of the source mesh. At its core, our method deforms the source mesh using progressive positional encoding. We show that by leveraging gradually increasing frequencies to guide the neural optimization, we are able to achieve stable and high quality mesh transfer. Our approach is simple and requires little user guidance, compared to contemporary surface mapping techniques which rely on parametrization or careful manual tuning. Most importantly, Mesh Draping is a parameterization-free method, and thus applicable to a variety of target shape representations, including point clouds, polygon soups, and non-manifold meshes. We demonstrate that the transferred meshing remains faithful to the source mesh design characteristics, and at the same time fits the target geometry well.






1. Introduction

Polygonal meshes are the de facto most common discrete representation of surfaces in computer graphics. For geometric modeling purposes, their expressiveness allows them to capture adequate approximations of many object surfaces. At the same time, their modular design has sparked a plethora of interactive editing tools. That, in turn, makes them appealing to 3D artists for various creative tasks, such as modeling, sculpting, and rigging.

The flexible nature of meshes allows artists to conceive shapes of varying densities by diligently employing local operations. Detailed mesh areas, and in particular curved parts and features relevant for articulation, are usually carefully crafted with a higher polygon count and certain connectivity structures. For example, a mesh of a humanoid may contain both a detailed, dense polygon fan of an eye and a low-poly, flat torso. Much of this delicate design involves meticulous repetitive work, oftentimes when jointly drafting multiple mesh models. Experts may resort to the use of templates, but this is limited to certain use cases. A scanned point cloud, for instance, is not generated from such a template and therefore cannot directly inherit a desired mesh topology. To alleviate the hard labor of artists, a wealth of research effort has focused on meshing and remeshing techniques, which, given a target polygonal shape, generate an alternative, coherent and enhanced mesh topology. This is usually achieved by optimization methods with various objectives that aim to improve the polygon distribution while avoiding degenerate faces [campen2017partitioning].

While these methods are useful, they are inadequate at times, as they neither allow users full control over the resulting connectivity, nor directly support polygon soups and point cloud targets. As a consequence, they are missing the opportunity to re-use sophisticated meshes carefully crafted by experts.

In this paper, we introduce an alternative approach, namely, a method for transferring an existing, high quality reference mesh structure to a comparable target shape. Unlike previous methods that optimize both the connectivity and the geometry of the target shape mesh, our method reuses the source mesh characteristics while deforming it to best fit the geometry of the target. Our algorithm relies on a neural optimization process at its core: we learn the weights of a lightweight network to deform a given source mesh to the target shape. In Figure 1, we show a mesh transfer from a source surface (left) to three different target shapes. We follow contemporary literature and overcome the shortcomings of simple neural nets by employing varying positional encodings. More specifically, we gradually map the source mesh vertex positions through encoding functionals of increasing frequencies [hertz2021sape], from low to high, before feeding them into the network. This progressive scheme introduces an inductive bias of slowly changing global deformations, which allows us to maintain a stable optimization through the earlier, critical steps. Towards the end of the optimization, it also allows us to fit delicate details, where higher frequencies pertaining to fine local features are mapped.

We propose Mesh Draping, a parameterization-free, neural approach for mesh transfer between shapes. Mesh Draping is an interactive tool, where the user merely guides the displacement procedure. That is, the source mesh is “draped” over the target geometry. Our optimization technique yields high quality meshes, faithful to both the source design and the geometry of the target shape. Most importantly, it is not limited by the target shape representation, rendering our approach suitable for transferring mesh structures to an assortment of shape formats, such as point clouds, polygon soups, and non-manifold meshes. We demonstrate results of transferring triangle and quad meshes on a variety of examples, ranging from 3D models generated by artists to synthetic objects comparable with other methods.

2. Related Work

2.1. Remeshing

The literature on meshing and remeshing is concerned with methods that directly generate new mesh structures of the desired shapes without assuming any prescribed connectivity. Oftentimes, a generated mesh of good quality ideally comprises well-shaped triangles or quadrilaterals that are aligned to principal directions.

Different remeshing methods allow varying degrees of control by the user. Automated or semi-automated techniques may utilize high level inputs, such as density control and crease selection [alliez2002interactive; lai2008incremenral; fang2018quadrangulation; jakob2015instant]. Other methods allow further interactions with the user. [bommes2009mixed] let users specify orientation and alignment constraints. [campen2014dual] introduce dual strip weaving, a framework where the user can specify the main layout of the quad grid. [marcias2015data] learn quadrangulation patterns, applying them by guided user sketches of the edge flows. [ebke2016interactively] suggest a rapid feedback interface that enables more precise control by specifying constraints, such as the placement of singularities and their connections, as well as smooth surface regions, among others. The above methods generate high quality meshes; however, they do not offer users the option of employing existing, custom mesh structures.

Another class of remeshing methods deals with compatible meshing. Such methods remesh multiple shapes to generate common connectivity among them [yang2018volume; yang2020error]. These methods generally aim to produce meshes of regular connectivity and minimal polygon count that fit both shapes to facilitate applications like morphing or attribute mapping; however, they discard the original mesh connectivity, in contrast to the goal of our work.

For further reading about triangle and quadrilateral remeshing algorithms, we refer the curious reader to the comprehensive surveys by [alliez2008recent] and [campen2017partitioning].

2.2. Surface Mapping

Our work shares similarities with methods that map attributes between surfaces. In the following, we provide a brief summary of relevant surface mapping techniques; a more extensive discussion is available in the survey by [tam2013registration].

Broadly speaking, most works can be characterized into extrinsic approaches, which leverage surface deformation embedded within a 3D space, and intrinsic approaches, which achieve a complete bijective mapping between the two surfaces via parameterization to some intermediate domain.

In a purely extrinsic approach, the deformation process is preceded by global alignment defined by corresponding points on the source and target shapes [zhang2008deformation; li2008global]. Local transformations calculated per element depend on the global orientation of both shapes. Early approaches, such as the renowned iterative closest point method (ICP) [chen1992icp; besl1992icp], assume the scenario of rigid registration, that is, the transformation between the source and target shapes maintains pairwise point distances. Follow up works remove this restriction and perform non-rigid registration [sharf2006SnapPasteAI], possibly in a setting of partially overlapping surfaces with outliers [li2008global; huang2009nonrigid; bouaziz2013sparse].

By contrast, in the intrinsic line of works, mapping is achieved by means of parameterization to an intermediate domain. The actual mapping is achieved using a composition of mappings from source shape to the intermediate domain and an inverse mapping from the intermediate domain to the target shape. Representative examples include [kraevoy2004cross; kim2011blended; aigerman2015seamless; aigerman2015orbifold; aigerman2016hyperbolic; baden2018mobius; schmidt2019distortion]. Many of these works aim for either continuous or bijective mappings, or both. To handle non-isometric cases, [mandad2017variance] suggest using an optimal transport plan without requiring an intermediate domain, geared towards as-conformal-as-possible mapping. A recent work by [schmidt2020inter] presents a novel method for continuous bijective mapping (homeomorphism) of attributes between surfaces of the same genus. Their method is able to obtain low intrinsic distortion, as well as generalize to arbitrary genus. [deng2020better] present a neural approach to reconstruct a target shape via an atlas of stitched patch mappings. Unlike our method, these methods rely on surface parameterization and cannot be easily extended to domains beyond manifold meshes, such as point clouds.

[ezuz2019elastic] propose a hybrid extrinsic approach that builds upon an intrinsic initialization. Their design combines an optimization scheme of elastic thin-shell deformation of the source mesh with projection over the target. They can handle non-isometric shapes, but struggle on highly complex meshes.

The problem of non-isometric mapping has also been studied from a spectral point of view. [ovsjanikov2012functional; ovsjanikov2017computing] propose using functional maps to transfer functions between two meshes using eigendecomposition. Newer neural variants of this method also exist [litany2017deepfunctional; ginzburg2020cyclicFM; donati2020deepgeometricfunctional]. These works create vertex-to-an-area mappings, rather than vertex-to-vertex mappings, which renders them less suitable for the purpose of mesh transfer.

The work of [tierny2011inspired] is closest to ours in spirit, as it also allows for direct quad mesh transfer between source and target meshes. In their work, they construct a corpus of source meshes, and use cross-parameterization with as-rigid-as-possible deformations to automatically generate a mapping between the shapes. Their method assumes that a parameterization of the target shape is available, as well as the existence of a homeomorphism. It requires a preprocessing phase for generating the aforementioned corpus, where boundary conditions are provided and segmentation masks are marked, and it then requires stitching the segments back together in a postprocessing step.

A common application for surface mapping methods is texture mapping, where the emphasis is on minimizing global distortion of the mapped textures. In contrast, our work is parameterization-free, global, and differs by focusing on correlative face transfer between similar areas of the two shapes, denoted by users, as well as maintaining the original mesh integrity. While mesh transfer can be implemented via a surface mapping method, we show in Section 4 that such methods produce inferior results in the extrinsic case, and cannot be used when a manifold mesh is not available and bijective parameterization cannot be directly achieved.

2.3. Shape Deformation

Applying unconstrained deformations to mesh vertices introduces a risk of geometric artifacts, such as intersecting faces. Our work shares common ground with shape deformation methods, and similarly to them, we cope with the challenge of maintaining the mesh coherency under geometric warping. To that end, many of the shape deformation methods are concerned with the type of regularization imposed on the mesh transformations, or proper design of interactive editing tools offered to the user. We highlight however, that the end goal of shape deformation works is somewhat different: they aim to preserve a single source shape under some set of user constraints. For an elaborate review of non-neural methods, we refer readers to [DeformationTutorial:2009], [jacobson2013algorithmsai] and the references therein.

To tie up the discussion, we mention works most relevant to our problem domain. [TemplateMeshFitting:2010] suggest deformation by template based fitting of 3D meshes, using a coarse to fine pipeline. [jacobson2011bounded] present a unified interactive approach combining control handles, skeletons and cages to deform a given shape. Their method uses weighted linear blending for practical purposes of mesh editing and rigging.

More recently, [yifan2020neural] demonstrated a learned variant of cage based deformation. Given a source and a target shape, their algorithm warps the source mesh to match the target. However, this work focuses on preserving local details within the source shape, and does not guarantee that the result intimately coincides with the target shape. Another contemporary work by [wang20193dn] performs neural warping of source to target shapes by concatenating their global representations and predicting per-vertex offsets. Their method relies on a compound loss that assumes symmetry, and is limited to the domain of the training set. The ShapeFlow method [jiang2020shapeflow] is a scheme for learning a deformation space. Given a target shape, the approach employs a nearest neighbor search for a pre-learned source candidate to guide the warp process. Our method, in contrast, does not assume any learned shape representation, and can be applied directly to novel shapes from unseen domains.



Figure 2. Method Overview. Our pipeline begins with a manual step of marking sparse correspondence points between the source and the target. Then, source mesh deformation takes place to obtain an initial solution for the mesh transfer neural optimization. After that, the optimization commences until it finally converges to the resulting transferred mesh.



Panels: Area / Angles / Angles + Area

Figure 3. Ablation of the structural loss components (Eq. 3). Middle: considering only the area term leads to large distortions with respect to the source mesh (blue). Second from right: also minimizing the angle error term achieves better results. Right: combining both terms best preserves the tessellation of the source mesh.



Panels: No Encoding / PE / Mesh Draping

Figure 4. Mesh transfer from source to target, comparing various neural optimization methods. No Encoding, which uses common MLPs with ReLU activations, fails to fit delicate target geometry details. Positional Encoding is able to fit fine details, but converges to a sub-optimal minimum which introduces distortions. Progressive Positional Encoding endows a coarse-to-fine inductive bias, which contributes to stable optimization and the ability to achieve high quality mesh transfer.

3. Method Overview

Our framework begins with a preliminary step, where the user specifies a small number of correspondence points on the source and target shapes. After that, the principal part of the algorithm, the optimization, takes place automatically, during which we allow the source mesh vertices to shift their positions to fit the target shape. We optimize an objective function that expresses the Euclidean distance between the source mesh and the target geometry, regulated by the marked user correspondence points. The objective function encapsulates complementary terms that concurrently fit the target shape and respect the structure of the source mesh. In a nutshell, the process iteratively projects the source mesh onto the target surface, and shifts the projected vertices to preserve face angles and local area entropy.

During the optimization phase, our method learns the parameters of a deep neural network that performs the pairwise mapping of source mesh vertex positions to offsets that fit their target shape locations. To facilitate the learning process, we introduce a progressive positional encoding layer [hertz2021sape] at the head of the network. Simply put, a progressive encoding layer maps the input vertex positions to a higher dimensional space by feeding them through periodic functions with gradually increasing frequencies. During optimization, it progressively reveals higher frequencies of the source vertices' positional embedding as input to the mapping network. We corroborate the claims of [tancik2020fourfeat; hertz2021sape] and demonstrate how the optimization benefits both from the stability introduced by the spectral bias of the network [rahaman2019spectral] and from its unbiased solutions pertaining to high frequency mappings [sitzmann2020implicit; mildenhall2020nerf].
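The coarse-to-fine idea can be sketched in a few lines of plain Python. This is our illustration with a simplified, global reveal schedule, not the exact SAPE layer; the function name and blending rule are assumptions:

```python
import math

def progressive_encoding(p, num_freqs, alpha):
    """Map a 3D point to sin/cos features of increasing frequency.

    `alpha` in [0, num_freqs] controls how many frequency bands are
    "revealed": bands above alpha are zeroed and the band at the
    boundary is blended in linearly, mimicking a coarse-to-fine
    schedule. (Illustrative sketch only.)
    """
    features = list(p)  # keep the raw coordinates as well
    for k in range(num_freqs):
        # soft mask: 0 before band k is revealed, ramping up to 1
        w = min(max(alpha - k, 0.0), 1.0)
        for x in p:
            arg = (2.0 ** k) * math.pi * x
            features.append(w * math.sin(arg))
            features.append(w * math.cos(arg))
    return features

# Early in the optimization only low frequencies pass through ...
early = progressive_encoding((0.3, -0.1, 0.7), num_freqs=6, alpha=1.0)
# ... later, all bands are active.
late = progressive_encoding((0.3, -0.1, 0.7), num_freqs=6, alpha=6.0)
```

Increasing `alpha` over the optimization iterations reproduces the inductive bias described above: global, low-frequency deformations first, delicate local offsets last.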

In Section 3.1, we describe the preliminary setup of our method. In Section 3.2, we describe in detail the optimization terms of our mesh transfer solution. Finally, in Section 3.3, we lay out the progressive configuration which allows Mesh Draping to achieve stable optimization and high quality results.

3.1. Correspondence Setup

A key aspect of our method is that it provides the user means to guide the mesh transfer. First, the user marks a small set of correspondence points between the source mesh S and the target shape T. The correspondence points enable the estimation of an initial global affine transformation from S to T, followed by a biharmonic deformation [jacobson2011bounded] that utilizes the corresponding points on T as the boundary conditions. In addition, the user can specify the correspondences as rigid; for example, in Figure 8, the movement of the body parts is specified by rigid points. In those cases, we apply an additional as-rigid-as-possible deformation [sorkine2007rigid] using the rigid points as the boundary conditions. See Section 4.1 for elaborate implementation details.

The optimization phase starts when the user is satisfied with the initial deformation. The objective of the optimization is to bring the surface of S close to T while maintaining the structural properties of the original mesh, as described in Section 3.2.
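The initial global alignment from correspondences can be illustrated with a least-squares affine fit. This is a hypothetical helper in plain Python; the subsequent biharmonic and as-rigid-as-possible deformations (e.g. via libigl) are omitted:

```python
def solve4(m, b):
    """Solve a 4x4 linear system by Gauss-Jordan elimination with pivoting."""
    a = [row[:] + [bv] for row, bv in zip(m, b)]  # augmented matrix
    for col in range(4):
        piv = max(range(col, 4), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(4):
            if r != col:
                f = a[r][col] / a[col][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [a[r][4] / a[r][r] for r in range(4)]

def fit_affine(src_pts, dst_pts):
    """Least-squares affine map (3x3 matrix + translation) aligning the
    user-marked correspondence points src -> dst."""
    n = len(src_pts)
    rows = [list(p) + [1.0] for p in src_pts]  # n x 4 design matrix
    # normal equations (A^T A) x = A^T b, one solve per output coordinate
    ata = [[sum(rows[i][r] * rows[i][c] for i in range(n)) for c in range(4)]
           for r in range(4)]
    params = []
    for d in range(3):
        atb = [sum(rows[i][r] * dst_pts[i][d] for i in range(n)) for r in range(4)]
        params.append(solve4(ata, atb))
    return params  # params[d] = (m_d0, m_d1, m_d2, t_d)

def apply_affine(params, p):
    return tuple(sum(m * x for m, x in zip(row[:3], p)) + row[3] for row in params)
```

At least four non-coplanar correspondence pairs are needed for the normal equations to be well determined.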

Interactive Mode.

Mesh Draping also includes an interactive mode. Between the optimization epochs, the user may pause, and make additional local modifications by adding or adjusting the correspondence points.

3.2. Neural Optimization

Instead of formulating an explicit optimization term between the source mesh and the target, we use a neural network parameterization. Specifically, we use fully connected (FC) neural networks with parameters θ. Doing so has been shown to improve the optimization in various works [williams2019deep; Hanocka2020point], as the network serves as an "internal prior" in the optimization.

The mesh deformation is obtained through a direct neural optimization of the parameters θ of a mapping function f_θ that receives a source mesh S and outputs an optimized mesh M = f_θ(S).

The loss term follows directly from the definition of our problem:

L(θ) = d(M, T) + λ · s(M, S)

On the one hand, we would like our output mesh M to fit a given target shape T as closely as possible, i.e., minimize the distance d(M, T) between M and T. On the other hand, we wish to preserve the structural quality of the source mesh S, which is measured by s(M, S).

The distance loss is given by

d(M, T) = CD(M, T) + Σ_i ‖p_i − q_i‖²

where CD is a symmetric Chamfer distance between uniformly sampled points on the optimized mesh M and the target shape T. In addition, we keep the correspondence points specified by the user close by minimizing the squared distance between them, where p_i and q_i are pairs of corresponding points on M and T, respectively.
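A minimal sketch of the distance term, assuming point sets already sampled from the two surfaces (the function names and the brute-force nearest-neighbor search are ours for illustration; practical implementations use spatial acceleration structures):

```python
def sq_dist(p, q):
    """Squared Euclidean distance between two points."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def chamfer(pts_a, pts_b):
    """Symmetric Chamfer distance between two point sets: mean squared
    distance to the nearest neighbor, summed over both directions."""
    def one_way(xs, ys):
        return sum(min(sq_dist(x, y) for y in ys) for x in xs) / len(xs)
    return one_way(pts_a, pts_b) + one_way(pts_b, pts_a)

def distance_loss(sampled_opt, sampled_target, corr_pairs):
    """Chamfer term plus the squared distances between the user-marked
    correspondence pairs (p_i, q_i)."""
    corr = sum(sq_dist(p, q) for p, q in corr_pairs)
    return chamfer(sampled_opt, sampled_target) + corr
```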

The structural loss is given by

s(M, S) = Σ_{v ∈ M} E_∠(v) + D_KL(A_v ‖ Â_v)

where the summation is over the vertices of M. The first term, E_∠(v), represents the distortion of the angles on the 1-ring of v with respect to the original angles on S. The second term measures the local area Kullback–Leibler divergence, where A_v are the fixed areas of the faces around vertex v in S, normalized to have sum 1, and Â_v is the equivalent (non-fixed) local area distribution in M. Figure 3 shows the effect of each structural loss term on the final optimization result.
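The local-area term can be sketched as a per-vertex KL divergence between normalized face-area distributions. This is an illustrative fragment under our reading of the description; the paper's exact weighting may differ:

```python
import math

def normalize(a):
    """Normalize a list of positive face areas to sum to 1."""
    s = sum(a)
    return [x / s for x in a]

def local_area_kl(areas_src, areas_opt):
    """KL divergence between the fixed, normalized face areas around a
    vertex in the source mesh and the current areas around the same
    vertex in the optimized mesh."""
    p = normalize(areas_src)
    q = normalize(areas_opt)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

The term is zero when the relative area distribution around a vertex is preserved, regardless of uniform scaling, and grows as faces in the 1-ring shrink or expand relative to their neighbors.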

To prevent numerical issues caused by skinny faces, we utilize the quality measure for a triangular face from [liu2020neural]:

Q(f) = 4√3 · A_f / Σ_{i=1}^{3} l_i²

where A_f is the area of the face and l_i is the length of its i-th edge. When Q(f) → 0, the face approaches a degenerate zero area. To prevent such cases, we penalize all the faces in M whose quality falls below a small threshold.
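Assuming the common normalization where an equilateral triangle scores exactly 1 (our reading of the measure; the reference implementation may differ in constants), the quality can be computed as:

```python
import math

def sub(p, q):
    return tuple(x - y for x, y in zip(p, q))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def sq_dist(p, q):
    return sum((x - y) ** 2 for x, y in zip(p, q))

def triangle_quality(a, b, c):
    """4*sqrt(3)*area / (sum of squared edge lengths): 1 for an
    equilateral triangle, tending to 0 as the face degenerates."""
    area = 0.5 * norm(cross(sub(b, a), sub(c, a)))
    edges = sq_dist(a, b) + sq_dist(b, c) + sq_dist(c, a)
    return 4.0 * math.sqrt(3.0) * area / edges
```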

3.3. Progressive Positional Encoding

It has been shown that deep ReLU networks have a learning bias towards low frequency functions, i.e., a spectral bias [rahaman2019spectral]. The spectral bias of the network has a positive trait of preventing large deformations during the optimization process, which may otherwise be caused by an unstable Chamfer term leading to an erroneous local minimum. Unfortunately, the spectral bias also comes with a price: as the optimization proceeds, it prevents the local, delicate deformations that bring the surface of the source close to the target shape. That implies that common FC networks with ReLU activations have a hard time mapping a continuous domain to a high frequency image.

To mitigate the deficiencies of FC networks with ReLU activations, previous works [tancik2020fourfeat; mildenhall2020nerf; sitzmann2020implicit] suggested frequency based encodings. Specifically, in a mesh transfer scenario, source mesh vertex positions are first mapped via positional encodings (PE) before being fed as input to the FC network. However, in the case of mesh transfer, static frequency encoding schemes introduce a new problem: encoding functionals of high frequencies may overfit too quickly, causing the optimization to converge to suboptimal solutions. See, for example, Figure 4 for the distortion caused by positional encoding neural optimization.

Instead, Mesh Draping leverages the progressive positional encoding layer of [hertz2021sape]. Progressive positional encoding operates under the assumption that earlier iterations of the optimization benefit from the spectral bias, which is achieved by low frequency encodings. As the optimization converges, higher frequency encodings are used to fit the delicate features of the shape. In the context of mesh transfer, this formulation enforces an inductive bias which ensures the optimization is both stable and accurate. In our experiments we adopt a lightweight version of [hertz2021sape]: we leverage progressive positional encodings and trace their progression in a global manner, i.e., all vertices are exposed to higher frequencies at the same time-step.

Panels: Source / Elastic Correspondence / Mesh Draping

Figure 5. Comparison to direct surface mapping methods. Sparse correspondence points manually marked by users to guide algorithms are indicated by red dots. Face colors indicate semantic areas, showing how the mesh is stretched or contracted to fit the target shape. Mesh Draping is able to handle transfer of fine mesh structure in highly detailed areas.

Panels: Source / Neural Cages / Mesh Draping

Figure 6. Comparison to deep deformation methods. The usage of progressive positional encoding allows Mesh Draping to deform the source mesh and remain faithful to delicate details of the target shape.




Figure 7. Applying Mesh Draping to transfer the structure of the mesh faces on the left to the target meshes or point clouds on top. Our method supports direct mesh transfer to non-manifold shapes.
Figure 8. Mapping meshings of varying density, highlighted in different colors. Mesh Draping is able to fit the target shape while retaining the original mesh structure with minimal distortion (zoom in for details).

4. Evaluation

Unless specified otherwise, we use the same optimization configuration detailed in Section 4.1 for all described experiments.

4.1. Implementation Details

Correspondence Setup. For most results shown in this paper, our algorithm requires users to mark 8-15 pairs of correspondence points on average; this number may vary with the complexity of the models. As a reference, up to 10 correspondence points were used to generate all face figures in this paper, and up to 20 for the human bodies in Figure 8. For processing the influence area of correspondence points, we use the implementations of [jacobson2011bounded] and [sorkine2007rigid] from libigl [libigl].

Network Architecture and Optimization. Our architecture consists of an FC network with several hidden layers, where the first layer is a progressive positional encoding [hertz2021sape] layer divided into frequency blocks. Our optimization alternates between backpropagations on the distance loss and the structural loss, where the structural weight is set to a higher value for the first iterations and lowered for the rest of the optimization.

Latency. On a machine equipped with an Nvidia GTX 1080, the optimization takes up to 45 seconds to converge.

4.2. Evaluation Metrics

Our quantitative and qualitative evaluations highlight the importance of jointly optimizing a source mesh distortion metric, as well as the alignment with the target shape. We measure the distortion of the optimized mesh M with respect to the source mesh S by the discrete Dirichlet energy [pinkall1993computing]. The alignment integrity of the optimized mesh over the target shape is evaluated using the Chamfer and Hausdorff distances. It should be noted that the distortion and alignment metrics are complementary: optimizing one but not the other yields inferior results, as either the source structure is distorted or the result does not perfectly align with the target shape. Readers may confirm this by cross-comparing the visualizations of Figures 5, 6 and 9 with the quantitative evaluation in Table 1.
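For completeness, the discrete, point-sampled approximation of the Hausdorff distance used for alignment can be sketched as follows (brute-force nearest-neighbor search, for illustration only):

```python
import math

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two sampled point sets:
    the largest distance from any point to its nearest neighbor in
    the other set."""
    def directed(xs, ys):
        return max(min(math.dist(x, y) for y in ys) for x in xs)
    return max(directed(pts_a, pts_b), directed(pts_b, pts_a))
```

Unlike the Chamfer distance, which averages nearest-neighbor distances, the Hausdorff distance reports the single worst misalignment, so it is sensitive to isolated regions where the transferred mesh fails to reach the target.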

The aforementioned metrics are staple, and are commonly used on an individual basis. However, when measured concurrently for mesh transfer quality assessment, one has to account for their scale differences, their specific ranges, and how to effectively quantify them into one, comparable measurement. To that end, we propose a novel evaluation metric, Q_τ, which combines source distortion and target alignment measurements into a single score.

We define Q_τ using the following template function:

Q_τ(S, T, M) = τ^(E_D(S, M) + E_A(T, M)),   0 < τ < 1

Here, the notations follow Section 3.2, where S, T and M represent the source mesh, the target shape and the transferred mesh, respectively. E_D and E_A are respectively the source distortion and target alignment measure functions, which are chosen in this paper as the Dirichlet energy and the Hausdorff distance:

E_D(S, M) = Σ_f a_f ‖φ_f‖²_F − 1,   E_A(T, M) = c · H(T, M)

where for the discrete Dirichlet energy we follow the definition of [ezuz2019reversible] and assume φ_f is the linear transformation that maps face f of S to its image in M, and a_f is the face area. To map the Dirichlet energy to the range [0, ∞), we also subtract a single unit. To align the magnitude of the Hausdorff distance H, we scale it by a constant c, which we set empirically.

The calibration hyper-parameter τ is used to tone down the scale of the metric to better distinguish how different methods compare to each other. Throughout this paper, we use the same fixed value of τ for all evaluations. In practice, prior to metric computation, all 3D models are normalized to the unit cube to ensure scale invariance.

To conclude, Q_τ exhibits the following attributes:

  1. Q_τ ∈ (0, 1], where a higher score correlates with better quality.

  2. A perfect score of 1.0 is obtainable when E_D = 0 (no distortion occurs) and E_A = 0 (the optimized shape perfectly aligns with the target).

  3. When E_D → ∞ or E_A → ∞, then Q_τ → 0.

  4. A lower τ yields a “harder” evaluation metric, which penalizes to a greater extent both misalignment and high distortion.
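One concrete template satisfying all four listed attributes is an exponential decay in the two error terms. The form below is an assumed illustration, not necessarily the paper's exact formula:

```python
def combined_score(e_dist, e_align, tau=0.5):
    """Combine a source-distortion error e_dist and a target-alignment
    error e_align (both >= 0) into a single score in (0, 1].
    tau in (0, 1) calibrates how harshly errors are penalized."""
    assert 0.0 < tau < 1.0 and e_dist >= 0.0 and e_align >= 0.0
    return tau ** (e_dist + e_align)
```

With this form, a perfect result (both errors zero) scores exactly 1, any error drives the score toward 0, and a smaller `tau` makes the metric "harder", as described above.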

To tie up the discussion of metrics, we acknowledge that one of the terms in our evaluation, the Chamfer distance, is directly used as one of the optimization terms. We emphasize that this choice does not impact the integrity of the evaluation, as the optimization objective encompasses additional terms and our evaluation consists of other metrics as well.

Method             Chamfer (↓)   Hausdorff (↓)   Dirichlet (↓)   Joint (↑)
Elastic Corresp.      1.97           4.23            1.52          0.690
Inter-Surface         1.39           3.14            2.29          0.770
Ours                  1.18           1.91            1.84          0.900
HOT                  13.78           4.57           12.76          0.327
RHM                   8.78           3.49            6.54          0.534
Ours                  8.34           2.22            3.29          0.791
Neural Cages         38.68          11.16            1.1           0.361
ShapeFlow            12.81           8.77            4.8           0.407
Ours                  1.84           1.74            1.6           0.933

Table 1. Quantitative comparison of our approach to other surface mapping and deformation methods on the task of mesh transfer. We measure the mean Chamfer distance, Hausdorff distance, Dirichlet energy, and our proposed joint metric over the shapes shown in Figures 5, 6 and 9.

Panel: Mesh Draping

Figure 9. Comparison to additional surface mapping methods on the SHREC-BIM benchmark. In this comparison, correspondence points are provided with the data and not by manual user interactions. Red dots indicate correspondence points used by each method.

4.3. Comparisons

We compare our method to the methods we found most relevant to the task of mesh transfer. The quantitative results are summarized in Table 1.

SHREC-BIM benchmark. We evaluate our method on the BIM benchmark [kim2011blended], which includes 200 pairs of meshes from the SHREC dataset [giorgi2007shape]. Each pair is supplied with corresponding points (between 2 and 36 points). We compare our method with two parameterization based methods: Hyperbolic Orbifold Tutte Embeddings (HOT) [aigerman2016hyperbolic] and Reversible Harmonic Maps between Discrete Surfaces (RHM) [ezuz2019reversible]. Both methods receive the corresponding points as input and output a parametrization between the meshes that maps the vertices of one mesh to barycentric coordinates on the other mesh. As can be seen in Figure 9, the parametrization methods may struggle to preserve the triangulation of the source mesh; see, for example, artifacts in the RHM outputs such as flipped faces on the fish tail and the humanoid hand.

Additional mapping methods. We compare our method to two additional surface mapping methods on our custom test set. The first is Elastic Correspondence (ELC) [ezuz2019elastic], using the initialization scheme of Aigerman et al. [aigerman2016hyperbolic], and the second is Inter-Surface Maps (ISM) [schmidt2020inter]. Both methods are based on parameterization. Consequently, they are applicable only to pairs of meshes, and also assume the existence of a bijectivity between the surfaces.

The comparisons are shown in Figure 5, where the input to all three methods is a pair of source and target meshes with marked corresponding points. As evident from careful observation of the corresponding segmented parts marked on the source and the optimized mesh, ELC causes semantic distortion, for example, on the ears of the top face or the cow's mouth. Both ISM and our method preserve the semantic correspondence between the meshes. These observations are also reflected in the quantitative results. Due to some distortions in the mapping of ISM, our method achieves slightly better results.

It is important to mention that both methods, ISM and ELC, compute a complete bijective map between the input pair of meshes. However, errors arise where the new discrete tessellation of the source does not properly cover the target when the map is projected onto the target mesh. We suspect that these kinds of errors appear mostly in delicate regions, e.g., at the eyes of the topmost face and the cow's mouth (Figure 5). In comparison, our direct optimization avoids such distortions.

Comparison to deformation methods. We also compare our method to two deformation methods, both of which rely on a pre-training step over a specific dataset. These methods can be categorized by the priors they inject into the deformation allowed between the source and target shapes. The first method, Neural Cages (NC) [yifan2020neural], allows only coarse deformations by displacing the control points of a cage that wraps the source mesh. The second method, ShapeFlow (SF) [jiang2020shapeflow], enables a more flexible deformation by allowing displacements for all vertices of the source mesh. Its deformation is regularized through a loss function that includes a volume conservation term and a penalty term for non-isometric deformation.
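ShapeFlow's actual regularizer differs in its details; as a generic illustration of the idea, a non-isometry penalty can be expressed by comparing edge lengths before and after deformation. The following sketch (our own simplification, not the authors' loss) makes the prior explicit:

```python
import numpy as np

def isometry_penalty(src_verts, def_verts, edges):
    """Penalize non-isometric deformation: compare mesh edge lengths
    before (src_verts) and after (def_verts) deformation.

    src_verts, def_verts: (V, 3) vertex positions
    edges:                (E, 2) integer vertex-index pairs
    Returns the mean squared change in edge length (0 for an isometry).
    """
    l_src = np.linalg.norm(src_verts[edges[:, 0]] - src_verts[edges[:, 1]], axis=1)
    l_def = np.linalg.norm(def_verts[edges[:, 0]] - def_verts[edges[:, 1]], axis=1)
    return ((l_def - l_src) ** 2).mean()
```

A rigid motion leaves the penalty at zero, while stretching or shearing increases it; in a learned-deformation pipeline this term would be one summand of the total loss, differentiated through an autodiff framework rather than numpy.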

The results are shown in Figure 6, where we trained and tested each method on selected classes from the ShapeNet dataset [chang2015shapenet]. We apply our method without providing correspondence points, i.e., we disable the correspondence term in Eq. 2. In other words, we assume only a global orientation alignment between the source and the target meshes, as both other methods do. Both the qualitative and quantitative comparisons clearly highlight the differences between the methods. Our proposed progressive optimization better aligns the source mesh with the target shape, doing so with few distortions. By utilizing only global deformations, NC minimizes the distortion term, but the resulting mesh fails to satisfactorily take on the form of the target shape. Finally, SF brings the source mesh closer to the form of the target shape, but the resulting meshes usually suffer from undesired distortions.

Polygon soup and point cloud targets. Finally, we highlight the strengths of our progressive optimization by presenting visual examples of mesh transfer in non-trivial scenarios. In Figure 7, we demonstrate the transfer of face meshes to point clouds, a task which surface mapping methods relying on bijective mappings cannot directly perform. The quality of our results is on par with the mesh-to-mesh case. This setting pertains to a real-life application, where experts may reuse predefined meshes on recently scanned objects.

Varying density meshes. In Figure 8, we illustrate the transfer of meshes of varying density. For clarity, different areas of interest are segmented by color. Although our approach is global, it is able to transfer both sparsely and densely detailed polygonal areas with minimal distortion, maintaining the original intent of the expert artist. This example alludes to the case where specific parts of the mesh are highly detailed for a particular purpose, such as animation.


Figure 10. Limitation example: mesh transfer from the dinosaur mesh to the giraffe shape yields sub-optimal results unfaithful to the original target shape. Due to large shape differences, the tessellation of the dinosaur inherently struggles to accurately fit the giraffe ears or the delicate structure of its feet.

Limitations. While Mesh Draping produces very good results in many cases, there is still room for improvement. First and foremost, Mesh Draping is intended for pairs of shapes with similar parts. For example, mesh transfer will succeed between different four-legged animals (Figure 1), but will under-perform when transferring meshing from humans to animals, or from a dinosaur to a giraffe, as shown in Figure 10. Here, the shape of the giraffe's horns requires special meshing, which does not exist on the dinosaur head model. This phenomenon is also demonstrated in Figure 6, row 4: the target truck shape contains a deep depression right above the right-front wheel, whereas the source shape is smooth. As a result, part of the resulting wheel mesh appears deformed. In addition, in extreme cases of sharp edges (so-called "spikes"), the method may struggle to achieve perfect alignment between the optimized mesh and the target shape. To maintain high quality results on very detailed meshes, users may be required to define additional correspondence points. Since we do not use a pre-defined training set, in such cases user intervention is necessary.

By design, our method does not alter local connectivity at all. However, where needed, one may perform local subdivision or remeshing as a post-process. Finally, the latency measurements in this paper are reported for an unoptimized implementation. Subsequent efforts may reduce the runtime cost (detailed in Section 4.1) by further optimizing the Mesh Draping logic.

5. Conclusion and Future Work

We presented a parameterization-free approach for transferring mesh structure to comparable shapes. The proposed method uses neural optimization with progressive frequency positional encodings, which contribute to both stable optimization and high quality fitting of fine details. We demonstrated the applicability of our method on a range of examples, including the re-use of triangular and quad meshes on target meshes and point clouds.

Future work may leverage an unpaired training set of shapes to obtain priors on common shapes for faster and more robust optimization. Another direction for future work may improve the computation time of our optimization, or its flexibility by allowing users to guide mappings between different topologies, or jointly deform a shape from several sources.