Generative shape models are tremendously useful in computational anatomy (shape representation, population analysis), medical imaging and computer vision (segmentation, tracking), computer graphics and beyond. Most approaches to statistical shape analysis fundamentally rely on registration, from landmark-based representations and active shape models [4, 1, 25], to medial representations and Principal Geodesic Analysis, to deformable registration and diffeomorphometry [8, 33]. Registration is known to be a source of bias in shape analysis, but is often a necessary ‘evil’ because input data does not come pre-aligned in a common reference frame (or pose). In contrast, the shape information of interest is often invariant to the object pose. Our main motivation is to investigate geometric tools that can open the way to learned, pose-invariant generative shape models (specifically, curves and 3D surfaces). The key insight is that mean curvature is pose-invariant and generally characterizes the shape losslessly. This work investigates spin transformations as the algorithmic tool to computationally implement this insight. The cornerstone of the framework lies in a gracefully simple equation that relates a spin transformation λ (one quaternion per face in the mesh) to a change in mean curvature ρ via a first-order differential operator: Dλ = ρλ, with D a Dirac-type operator detailed in section 4.
Typically, a desired change of curvature is specified via ρ, yielding a transformation λ from which a new shape can be constructed. The present work demonstrates this concept and shows its applicability to manipulating (flowing and extruding) closed shapes in a stable manner across large deformations. Section 3 reviews the discrete geometric setting, i.e. (i) the geometric objects to which the framework applies, (ii) discrete mean curvature, (iii) background on quaternions as similarity transformations in ℝ³. Section 4 introduces discrete spin transformations. Within the framework of spin transformations, the task of flowing a mesh onto a reference shape and that of extruding a shape back from the reference are highly symmetric: both rely on the ability to compute a transformation based on prescribed curvature and area changes. Section 5 gives an overview of the proposed procedure. Section 6 discusses applications and results.
2 Related work
2.0.1 Pose-invariant shape analysis.
Spectral shape representations, built from the spectrum and eigenfunctions of the Laplace(–Beltrami) operator, have achieved popularity in this context, spanning a variety of applications, e.g., object retrieval, shape dissimilarity quantification, analysis of anatomical structures [26, 10, 31], and transfer of structural and functional data [27, 20]. Spectral representations pose two challenges: firstly, going back from the spectral descriptor to the corresponding shape is difficult; and secondly, they tend to discard fine-grained, local information in favor of global shape properties and symmetries. To supplement the intrinsic Laplace–Beltrami operator, the extrinsic Dirac operator, which carries more information about the shape immersion, has recently been investigated for shape analysis. Geometric deep learning provides the toolset to analyze functions over a fixed graph; it remains unclear how to analyze graphs themselves. The present work contributes a lossless and invertible mechanism for turning a mesh into a function (the curvature) over a reference template (say, a sphere).
2.0.2 Shape flows, large deformations, conformal maps.
Mean curvature flow is the archetypal algorithm for fairing, in part due to its simplicity and intuitive appeal. However, mesh quality tends to degrade quickly throughout the flow, requiring tedious monitoring and remeshing to reduce artefacts and prevent singularities. Furthermore, it is not suitable for mesh extrusion. Conformal maps are often perceived as the gold standard in such contexts, and spin transformations originate from this perspective [5, 6]. Several discretized and discrete quasi-conformal frameworks have been proposed (e.g., [22, 18]) on top of an incredibly rich body of theoretical work. Conformal maps have found a natural application in the context of brain mapping [13, 11] by mapping the cortical surface to a reference domain. Rather than strictly on conformality, our focus here is on a parametrization of large deformations that (1) works from the pose-invariant mean curvature and (2) allows one to efficiently flow between any shape and a reference. It is more generally related in spirit to large deformation diffeomorphic frameworks [30, 21] that can flow a shape from a template and a (pose-equivariant) vector field. Our work expands on the framework of discrete spin transformations as introduced by Ye et al. One of the appeals of a discrete framework is to bypass discretization errors by design and to offer a consistent definition of discrete geometric concepts such as curvature. We introduce the framework to the community and contribute (i) an optimization strategy that gives finer-grained control over deformations; (ii) constraints, derived within this formulation, for integrability on general topologies and for area preservation; (iii) an exploration of its potency for mesh extrusion.
3 Discrete Geometric Setting
Face edge-constraint nets.
Our work focuses on the case of closed compact orientable surfaces in ℝ³ and follows the discrete geometric setting introduced by Ye et al. Surfaces are discretized as face edge-constraint nets, generic constructs that encompass but are not restricted to standard triangulated meshes. The net combinatorics consist of vertices, faces and edges. Adjacent faces meet along a single edge. Edges are shared by exactly two adjacent faces. Faces can be arbitrary polygons (such as with simplex meshes). In addition, let each face be assigned a unit normal, such that for any two adjacent faces joined along an edge, the edge is orthogonal to the sum of the two face normals — a looser condition than orthogonality to each face. The resulting structure is called a face edge-constraint net. For instance, standard triangulations with normals orthogonal to faces are face edge-constraint nets.
Discrete mean curvature.
Let a face edge-constraint net be given. The net is orientable and, without loss of generality, directed edges are traversed in the direction towards which they point when cycling over the vertices of a face. This lets us orient the dihedral angle between the planes of two adjacent faces. For standard triangulations, this is just the bending angle between faces. The integrated mean curvature on an edge (Fig. 2(c)) is defined from the oriented dihedral angle and the edge length. The integrated mean curvature on a face is the sum of its integrated edge curvatures. The discrete mean curvature follows by turning the integrated face curvature into a density over the face, i.e. dividing by the face area. With this, the discrete mean curvature satisfies a discrete counterpart to Steiner’s formula (Eq. (2)), a characterization of mean curvature that relates it to the relative change of area when offsetting the surface in the normal direction (replace the face by an infinitesimal area element for the original smooth formula).
Geometry in the quaternions.
Quaternions ℍ provide a natural algebraic language for geometry in ℝ³, much like complex numbers for planar geometry. Let 1, i, j, k denote a basis for ℍ. Elements of ℝ³ are identified with pure imaginary quaternions xi + yj + zk, so that surfaces are naturally immersed in ℍ. Denote by q̄ the quaternionic conjugate of q. The norm of q is defined as the square root of qq̄. All non-zero quaternions admit an inverse q⁻¹ = q̄/|q|². Again like complex numbers, quaternions admit a polar decomposition q = |q|(cos θ + u sin θ) with u a unit vector, which makes their geometric meaning more explicit. Indeed q̄vq (for unit q), also known as conjugation by q, expresses a rotation of v around the axis u by an angle 2θ. In the same vein, for a general quaternion λ = |λ|(cos θ + u sin θ), the expression λ̄vλ conveniently expresses a similarity transformation: v is rotated around u by 2θ and rescaled by |λ|².
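These operations are easy to exercise numerically. Below is a minimal NumPy sketch of the similarity transform v ↦ λ̄vλ; the helper names are ours, not part of the paper's notation, and the sense of rotation depends on the conjugation order:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def qconj(q):
    """Quaternionic conjugate: negate the imaginary part."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def similarity(lam, v):
    """Apply v -> conj(lam) v lam to a vector in R^3, identified with
    the pure imaginary quaternion (0, v)."""
    vq = np.concatenate([[0.0], v])
    return qmul(qmul(qconj(lam), vq), lam)[1:]

# lam = cos(theta) + sin(theta) u: rotation by 2*theta about the axis u
# (clockwise about u for this conjugation order), rescaled by |lam|^2.
theta = np.pi / 4
u = np.array([0.0, 0.0, 1.0])
lam = np.concatenate([[np.cos(theta)], np.sin(theta) * u])

assert np.allclose(similarity(lam, np.array([1.0, 0.0, 0.0])), [0.0, -1.0, 0.0])
# Doubling lam rescales the output by |2 lam|^2 / |lam|^2 = 4.
assert np.allclose(similarity(2 * lam, np.array([1.0, 0.0, 0.0])), [0.0, -4.0, 0.0])
```

For a unit quaternion the transform is a pure rotation; the magnitude of λ carries the scaling, which is the mechanism the framework exploits to encode local area changes.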
Every edge in the net is associated with a quaternion dubbed a hyperedge, with real part equal to the integrated mean curvature at the edge, and imaginary part equal to the embedding of the edge: E = H + e (Eq. (3)), with H the (real) integrated edge curvature and e the (pure imaginary) edge vector.
Hyperedges are the fundamental structure on which discrete spin transformations act. They summarize all the geometric information that, along with the mesh combinatorics, allows one to reconstruct the discrete surface immersion (Appendix 0.A). With that, spin transformations are introduced in a straightforward manner.
4 Discrete Spin Transformations
Discrete spin transformations.
A discrete spin transformation associates a single quaternion λ with each face of a face edge-constraint net. The transformation acts on hyperedges and face normals as follows: for an edge shared by faces i and j, Ẽ_ij = λ̄_i E_ij λ_j, and each normal transforms by conjugation, ñ_i = λ̄_i n_i λ_i / |λ_i|² (Eq. (4)).
The elegance of the construct lies in the fact that Eq. (4) does transform a face edge-constraint net into another face edge-constraint net. This is easily checked, with the main elements of the proof stemming from the geometric interpretation of hyperedges (Appendix 0.A) and from the constraint on face normals. Furthermore, discrete spin transformations are trivially invertible, by applying the transformation built from the per-face inverses λ⁻¹. The integrability condition that each face in the new net closes (its transformed edges sum to zero) is equivalent to the existence of a real-valued function ρ over faces such that Dλ = ρλ (Eq. (5)).
Equation (5) is the cornerstone of the framework. D is henceforth referred to as the intrinsic Dirac operator. It sends a quaternionic function over faces to another quaternionic function over faces. Left-multiplying both sides of Eq. (5) by λ̄, the closedness constraint on faces is immediately apparent: ρ must be real-valued. For ease of exposition, the expression in the introduction is formulated using a slightly different yet immediately related operator, the extrinsic Dirac operator. It also discards the normalization by face areas. The proposed normalization however mirrors more faithfully the smooth counterpart of the present setting.
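The action on hyperedges and its trivial invertibility can be checked numerically. The sketch below assumes the per-face action Ẽ_ij = λ̄_i E_ij λ_j on a single hyperedge, with arbitrary example quaternions; all helper names are ours:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def qconj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def qinv(q):
    return qconj(q) / np.dot(q, q)

def spin(lam_i, lam_j, E):
    """Assumed per-face action on a hyperedge shared by faces i and j."""
    return qmul(qmul(qconj(lam_i), E), lam_j)

# Hyperedge: real part = integrated edge curvature, imaginary part = edge vector.
E = np.array([0.3, 1.0, 0.0, 0.0])
lam_i = np.array([1.0, 0.2, -0.1, 0.3])   # arbitrary per-face quaternions
lam_j = np.array([0.9, 0.0, 0.4, -0.2])

E_new = spin(lam_i, lam_j, E)                   # transformed hyperedge
E_back = spin(qinv(lam_i), qinv(lam_j), E_new)  # per-face inverses undo the action
assert np.allclose(E_back, E)
```

The round trip works because conj(λ⁻¹) λ̄ = λλ̄/|λ|² = 1 on each side, which is the algebraic content of the invertibility claim above.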
The intrinsic Dirac operator creates an explicit relationship between a spin transformation λ and the discrete (resp. integrated) mean curvature of the new net, as long as the new net closes. Coupling with Eq. (5), the prescribed ρ directly determines the new curvature.
When λ is the identity transform, ρ coincides with the mean curvature. For smooth λ, Eq. (4) rescales the length element by |λ|². In other words, ρ jointly describes the mean curvature and length element. This quantity is precisely known as the mean curvature half-density in the smooth setting, and is generally in one-to-one correspondence with a given shape. Finally, with the extrinsic Dirac operator, the corresponding ρ describes a change in half-density instead.
The Dirac operators have a number of properties that make them appealing for various tasks in shape analysis. Both the intrinsic and extrinsic operators are self-adjoint. Dirac operators relate to square roots of the Laplace–Beltrami operator. Whereas the Laplace–Beltrami operator captures the intrinsic manifold geometry and is invariant by isometry, the Dirac operators can disambiguate much more about the surface immersion into ℝ³. We refer the reader to [19, 32] for a discussion from this perspective. The eigenvectors of Dirac operators all satisfy Eq. (5) and thus provide new immersions of the abstract manifold into ℝ³ (via the new transformed hyperedges). The first (null) eigenvector of the extrinsic Dirac operator is trivial.
The intrinsic Dirac operator cannot have a null eigenvalue for closed surfaces (e.g. spherical topology) of practical interest in the present work, since that would result in a minimal closed surface with everywhere zero mean curvature. Its smallest eigenvector provides a generally non-trivial immersion with higher smoothness than the original shape (lower Willmore energy). Ye et al. explore this mechanism for the purpose of surface fairing. The next leading eigenvectors give some geometric insight into the shape (Fig. 3). In this work however, we investigate a closely related strategy with fine-grained control over the surface deformations.
Quaternions admit representations as 4×4 real matrices (Eq. (7)), so that standard linear algebra libraries can be used to solve quaternionic linear systems. In particular, the matrix representing the conjugate q̄ is the transpose of the matrix representing q; thus Hermitian (quaternionic) forms are represented by real symmetric matrices. We denote real vectors and matrix representations below with upright bold symbols.
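A minimal NumPy sketch of this representation (helper names are ours): the 4×4 left-multiplication matrix acts as the quaternion product, conjugation corresponds to transposition, and the representation is multiplicative.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def L(q):
    """4x4 real matrix of left multiplication by q = (w, x, y, z)."""
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w, -z,  y],
                     [y,  z,  w, -x],
                     [z, -y,  x,  w]])

rng = np.random.default_rng(0)
p, q = rng.normal(size=4), rng.normal(size=4)
assert np.allclose(L(q) @ p, qmul(q, p))            # matrix acts as the product
assert np.allclose(L(q).T, L(q * np.array([1, -1, -1, -1])))  # conjugate <-> transpose
assert np.allclose(L(qmul(q, p)), L(q) @ L(p))      # representation is multiplicative
```

The second assertion is the fact used in the text: quaternionic Hermitian forms become real symmetric matrices, so symmetric eigensolvers apply directly.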
The scalar function ρ introduced in section 4 provides the primary degrees of freedom for mesh manipulation, and it tightly relates to mean curvature. Of course, only a subset of functions ρ can be associated with some transformation λ such that the integrability condition Eq. (5) is satisfied; namely, D − ρ should have a null eigenvalue. This leads Crane et al. to solve for the smallest eigenvalue and its eigenvector, yielding a solution of Eq. (5) up to a small constant shift of ρ. We propose instead to formulate the objective as a minimization problem. This gives fine-grained control to add specifications (e.g. smoothness, area distortion), many of which can be efficiently expressed as linear(ized) constraints or quadratic regularizers, within a unified formulation. Thus finding λ amounts to solving a quadratic problem:
under a set of linear constraints on λ, where the mass matrix is a diagonal matrix of face areas. In practice we build the regularizer from an integrated Laplacian over faces. The eigensystem actually solved in prior work closely relates to the simplest case where there are no constraints and no regularization.
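In the unconstrained case this reduces to a smallest generalized eigenpair of the quadratic form with respect to the mass matrix. A minimal sketch, in which random dense matrices stand in for the actual Dirac-based forms (the true system is sparse and quaternionic):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n = 12
B = rng.normal(size=(n, n))
Q = B.T @ B + n * np.eye(n)            # symmetric PD stand-in for the quadratic form
M = np.diag(rng.uniform(1.0, 2.0, n))  # diagonal mass matrix (face areas)

# Smallest generalized eigenpair: argmin x^T Q x subject to x^T M x = 1.
vals, vecs = eigh(Q, M, subset_by_index=[0, 0])
x = vecs[:, 0]
assert np.allclose(Q @ x, vals[0] * (M @ x))   # generalized eigen-relation
assert np.isclose(x @ M @ x, 1.0)              # M-normalized minimizer
```

Additional quadratic regularizers simply add to Q, and linearized constraints restrict the feasible set, which is what gives the formulation its flexibility compared to a plain eigensolve.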
Overall, the procedure is as follows: prescribe a scalar function ρ for a target shape or curvature change (sec. 6); then solve for the spin transformation λ (Eq. (8)); finally compute new hyperedges (Eq. (4)) and solve a linear system for the new vertex coordinates (Eq. (9)). The steps are typically iterated, resulting in a flow.
5.0.2 Computing the new immersion.
Let transformed edges be indexed by their start and end vertices. Vertex coordinates must differ, across each edge, by the transformed edge vector. In practice, we solve the mathematically equivalent (Appendix 0.B) linear system
built from the standard discrete (cotangent) mesh Laplacian and divergence operators. This method of integration is robust to numerical errors. The Laplacian and divergence are computed w.r.t. either the source or target mesh metric (with empirically identical results). A benefit of working from a discrete setting is that no discretization error is introduced when passing from the transformed hyperedges to the corresponding immersion.
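The integration step can be sketched on a toy graph. With an incidence matrix d₀ (edges × vertices) acting as the discrete gradient, we solve the graph Poisson system d₀ᵀd₀ x = d₀ᵀ e; cotangent weights are omitted in this unweighted sketch, and the connectivity is an arbitrary example:

```python
import numpy as np

# Toy connectivity: directed edges (start, end) over 4 vertices.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n_v = 4
d0 = np.zeros((len(edges), n_v))     # incidence matrix: (d0 x)_k = x_j - x_i
for k, (i, j) in enumerate(edges):
    d0[k, i], d0[k, j] = -1.0, 1.0

x_true = np.array([0.0, 1.0, 3.0, 2.0])   # ground-truth vertex function
e = d0 @ x_true                           # exact edge differences

# Graph Poisson solve: L x = div e, with L = d0^T d0 the (unweighted) Laplacian.
# L is singular (constants are in its kernel); lstsq returns a valid solution.
x, *_ = np.linalg.lstsq(d0.T @ d0, d0.T @ e, rcond=None)
x -= x[0] - x_true[0]                     # fix the integration constant
assert np.allclose(x, x_true)
```

In the actual pipeline the same solve is performed per coordinate with cotangent weights, and the right-hand side comes from the transformed hyperedges; the Laplacian kernel (global translation) is the expected pose ambiguity.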
5.0.3 Geometrically constrained flows.
The proposed formulation (Eq. (8)) enables fine-grained control over the flow by prescribing additional constraints. For instance, the method extends to topologies beyond spherical by adding an exactness constraint (Appendix 0.C). The mapping can also be encouraged to preserve angles (i.e. conformality) and to minimize area distortion. Conformality is key to preserving mesh quality across exceptionally large deformations, which in turn preserves numerical stability. It is intuitively described as circles being locally transformed into circles, or indeed as texture preservation (Fig. 1). Quasi-conformality is inherent to the present framework: from Eq. (4), the relative length of edges is preserved as soon as λ varies smoothly across faces. On the other hand, large area distortion can be introduced, particularly in regions of high curvature. In some applications, we may prefer to trade off some distortion of angles for a better preservation of areas. We note again from Eq. (4) that the magnitude of the spin transformation relates to the local change of area. Thus scale changes (up to global rescaling) can be penalized via a linearized soft constraint over λ (Appendix 0.D).
5.0.4 Filtering in curvature space.
As described in related work, the flow can also be altered by directly manipulating ρ. The rate of change of geometric features at various scales can be tweaked by manipulating its frequency spectrum. Moreover, some constraints can be efficiently enforced by orthogonal projection of ρ onto a linear subspace. In particular, Appendix 0.C derives alternative integrability conditions in the form of simple linear constraints on ρ, for the proposed discrete geometric framework.
This section showcases the approach on a collection of structured meshes of subcortical structures from the UK Biobank database. The typical mesh size is of a few thousand nodes. The framework was implemented in NumPy and mostly relies on efficient sparse linear algebra. Experiments were run on a standard laptop (i7-8550U CPU @ 1.80GHz).
6.0.1 Surface Fairing.
Surface fairing is the process of producing successively smoother approximations of a mesh geometry. Most algorithms proceed by minimizing a fairing energy, such as the membrane energy or the Willmore functional. Gradient descent on the membrane energy yields the widespread mean curvature flow, which iteratively evolves points along the surface normal with a magnitude proportional to the mean curvature. Crane et al. first suggested, in the context of spin transformations, to optimize directly w.r.t. the curvature, yielding a simple flow in curvature space. A benefit of the approach is to decouple time and spatial integration, yielding numerically stable solutions across large time steps. We follow the same strategy. The prescribed change of curvature is then (optionally filtered and) integrated into a new surface immersion, by computing the corresponding spin transformation as per section 5. Specifically, for a given target curvature and area, we prescribe ρ accordingly (section 4). The standard unconstrained optimization (Eq. (8)), regularized with the face Laplacian (or one of its powers), yields quasi-conformal transformations (Fig. 1). Large steps typically remain stable. Whether ρ is numerically integrable can be checked by monitoring the discrepancy between edges integrated as per Eq. (4) and edges recomputed from the new vertex coordinates (after solving Eq. (9)). The closedness generally holds within a few percent across several large steps without an explicit constraint, and tighter still with an explicit constraint (Appendix 0.B). A trade-off between conformality and area distortion is achieved by weighing in a soft constraint on the squared norm of the logarithmic area distortion (Fig. 4).
6.0.2 Comparison to Mean Curvature Flow.
The procedure is compared with an incompressible mean curvature flow. Incompressibility is enforced to make the flow less prone to developing singularities, by adding a balloon energy driven by the average mean curvature. Two metrics of interest, defined over the mesh surface, are the conformality error and the logarithmic area distortion (after normalising to the same total area). The quality factor measures how close to conformal a transformation is, as the ratio of the largest to smallest eigenvalues of the Jacobian of the mapping from source to deformed mesh. For a conformal deformation, the quality factor is identically 1 throughout the mesh; however, the area distortion may become significant. Fig. 4 exemplifies the general observation that the mean curvature flow realises a suboptimal trade-off between angle and area preservation. As expected, unconstrained discrete spin transformations are quasi-conformal. Unavoidable area distortion is introduced but mesh elements retain their original quality (right column, top and middle). By contrast, the mean curvature flow arbitrarily destroys the mesh quality, angle and area ratios in regions of high curvature. Area-constrained discrete spin transformations implement a sensible compromise, whereby (i) area distortion is lessened; (ii) numerical stability is preserved; (iii) the conformal error increases rather uniformly over the entire mesh, leading to a graceful, slower loss of mesh quality. Quantitative results for surface fairing to a sphere, averaged over a random subset of meshes in the dataset and taking the maximum over the mesh surface, confirm these trends across the mean curvature flow, the unconstrained spin transformation and the area-constrained variant. For the area-constrained spin transform, the maximum area discrepancy simply reflects a user-specified soft target.
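The quality factor can be computed per element from the singular values of the Jacobian of the affine map between corresponding triangles. A 2D sketch under our own conventions (a similarity is conformal, quality 1; anisotropic stretching degrades it):

```python
import numpy as np

def quality_factor(src, dst):
    """Ratio of largest to smallest singular value of the Jacobian of the
    affine map sending 2D triangle `src` onto 2D triangle `dst` (3x2 arrays)."""
    S = np.column_stack([src[1] - src[0], src[2] - src[0]])  # source edge frame
    D = np.column_stack([dst[1] - dst[0], dst[2] - dst[0]])  # target edge frame
    J = D @ np.linalg.inv(S)                                 # Jacobian of the map
    sv = np.linalg.svd(J, compute_uv=False)
    return sv[0] / sv[-1]

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
# A similarity (rotation + uniform scaling) is conformal: quality factor 1.
c, s = np.cos(0.7), np.sin(0.7)
sim = 2.0 * tri @ np.array([[c, -s], [s, c]]).T
assert np.isclose(quality_factor(tri, sim), 1.0)
# Anisotropic stretching by a factor 3 along one axis yields quality factor 3.
stretch = tri * np.array([3.0, 1.0])
assert np.isclose(quality_factor(tri, stretch), 3.0)
```

For surface meshes the same computation is carried out in local 2D frames of the source and deformed triangles; note the quality factor is blind to uniform area change, which is why area distortion is reported as a separate metric.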
6.0.3 Mesh Extrusion.
The task is now to reconstruct (“extrude”) a shape of interest back from a reference sphere, given its mean curvature and area mapped onto the sphere surface. To our knowledge very little has been done in that direction, even in related work [6, 32]. To emphasize, we only wish to recover the original mesh up to pose and scale: encoding scale presents little difficulty, and shape is invariant under changes of pose. To evaluate the reconstruction accuracy, we rigidly align and rescale the extruded shape to the original one, and compute the maximum distance from points on the extruded mesh to the original surface. The strategy for extrusion closely mirrors that of mesh fairing, whereby we derive ρ from the target curvature and area and solve for the corresponding transformation. As a preliminary comment, note that the difficulty of mesh extrusion critically depends on the exact experimental setting and goal, as contrasted in the two following settings. The first experiment aims to estimate the accuracy that can be reached in the best scenario (somewhat upper bounded by the registration error). We take a collection of subcortical meshes from the UK Biobank (incl. brain stems, caudate, putamen, accumbens, amygdala, hippocampus, thalamus, pallidum) and flow them onto the unit sphere. We do not perform remeshing, only interpolating relevant maps to nodes and back to faces. We then directly reconstruct the mesh as described above. On average over the dataset, the maximum point-to-surface error is on the order of millimetres. The distribution of error is widely spread over different structures, the most challenging being caudates and hippocampi, and the least challenging being the accumbens, amygdala, pallidum and thalamus. This matches our expectations, given that caudates and hippocampi are in fact highly non-spherical, so that very significant area or angle distortion is introduced when mapping onto the sphere. The second experiment investigates a more challenging setup, whereby the flowed surface is remapped onto a reference sphere with uniform meshing. Shape-specific vertex density as well as face aspect ratio, which reflect the area and angle distortion introduced during the fairing, are thus discarded. We experiment with a set of brain stems (Fig. 5), which represent a middle ground between the most challenging and the most trivial structures, and observe low maximum reconstruction errors. For the most challenging structures, various strategies to guide the reconstruction, using either additional information obtained during the flow or multiscale approaches with hierarchical encoding, could be considered. This is left for future work.
We have presented a method to manipulate surface meshes across very large deformations by prescribing mean curvature (half-density). The framework is well suited for mesh fairing and extrusion, e.g. to map shapes to, or back from, a unit sphere. As a perspective, we believe the approach has potential for pose-invariant shape analysis, specifically for generative modeling. Indeed, mean curvature together with the metric is generally in one-to-one correspondence with the (closed) shape; this is in particular true for a spherical topology. We have shown how spin transformations computationally implement this insight. Therefore the shape geometry could be losslessly encoded as a scalar function on a template, making the modeling task more amenable to learning. In the smooth setting, spin transformations are a subgroup of conformal maps. This partly explains their numerical stability across large flow steps, a property inherited in the discrete setting. However, conformal maps can introduce significant area distortion, e.g. when flowing highly curved objects. An advantage of discrete spin transformations is to relax exact conformality, allowing the user to trade off angle for area preservation.
This work is supported by the EPSRC (grant ref no. EP/P023509/1) and the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 757173, project MIRA, ERC-2017-STG). DC is also supported by CAPES, Ministry of Education, Brazil (BEX 1500/15-05). KK is supported by the President’s PhD Scholarship of Imperial College London. IW is supported by the Natural Environment Research Council (NERC).
-  Belongie, S., Malik, J., Puzicha, J.: Shape matching and object recognition using shape contexts. IEEE Trans. Pattern. Anal. Mach. Intell. 24(4), 509–522 (2002)
-  Bronstein, A.M., Bronstein, M.M., Guibas, L.J., Ovsjanikov, M.: Shape Google: Geometric words and expressions for invariant shape retrieval. ACM Trans Graph 30(1), 1 (2011)
-  Bronstein, M.M., Bruna, J., LeCun, Y., Szlam, A., Vandergheynst, P.: Geometric deep learning: going beyond Euclidean data. IEEE Signal Processing Magazine 34(4), 18–42 (2017)
-  Cootes, T.F., Taylor, C.J., Cooper, D.H., Graham, J.: Active Shape Models – Their training and application. Comput Vis Image Underst 61(1), 38–59 (1995)
-  Crane, K., Pinkall, U., Schröder, P.: Spin transformations of discrete surfaces. In: ACM Trans Graph. vol. 30, p. 104. ACM (2011)
-  Crane, K., Pinkall, U., Schröder, P.: Robust fairing via conformal curvature flow. ACM Trans Graph 32(4), 61 (2013)
-  Delingette, H.: General object reconstruction based on simplex meshes. IJCV 32(2) (1999)
-  Durrleman, S., Prastawa, M., Charon, N., Korenberg, J.R., Joshi, S., Gerig, G., Trouvé, A.: Morphometry of anatomical shape complexes with dense deformations and sparse parameters. NeuroImage 101, 35–49 (2014)
-  Fletcher, P.T., Lu, C., Pizer, S.M., Joshi, S.: Principal Geodesic Analysis for the study of nonlinear statistics of shape. IEEE TMI 23(8), 995–1005 (2004)
-  Germanaud, D., Lefèvre, J., Toro, R., Fischer, C., Dubois, J., Hertz-Pannier, L., Mangin, J.F.: Larger is twistier: Spectral analysis of gyrification (SPANGY) applied to adult brain size polymorphism. NeuroImage 63(3), 1257–1272 (2012)
-  Gu, X., Wang, Y., Chan, T.F., Thompson, P.M., Yau, S.T.: Genus zero surface conformal mapping and its application to brain surface mapping. IEEE TMI 23(8), 949–958 (2004)
-  Hoffmann, T., Ye, Z.: A discrete extrinsic and intrinsic Dirac operator. arXiv (2018)
-  Hurdal, M.K., Stephenson, K.: Discrete conformal methods for cortical brain flattening. Neuroimage 45(1), S86–S98 (2009)
-  Joshi, S., Pizer, S., Fletcher, P.T., Yushkevich, P., Thall, A., Marron, J.S.: Multiscale deformable model segmentation and statistical shape analysis using medial descriptions. IEEE T Med Imaging 21(5), 538–550 (2002)
-  Kamberov, G., Pedit, F., Pinkall, U.: Bonnet pairs and isothermic surfaces. Duke mathematical journal 92(3), 637–644 (1998)
-  Kazhdan, M., Solomon, J., Ben-Chen, M.: Can mean-curvature flow be modified to be non-singular? In: Comput Graph Forum. vol. 31, pp. 1745–1754. Wiley Online Library (2012)
-  Konukoglu, E., Glocker, B., Criminisi, A., Pohl, K.M.: WESD–Weighted Spectral Distance for measuring shape dissimilarity. IEEE TPAMI 35(9), 2284–2297 (2013)
-  Lam, W.Y., Pinkall, U.: Infinitesimal conformal deformations of triangulated surfaces in space. Discrete & Computational Geometry 60(4), 831–858 (2018)
-  Liu, H.T.D., Jacobson, A., Crane, K.: A Dirac operator for extrinsic shape analysis. In: Computer Graphics Forum. vol. 36, pp. 139–149. Wiley Online Library (2017)
-  Lombaert, H., Arcaro, M., Ayache, N.: Brain transfer: Spectral analysis of cortical surfaces and functional maps. In: IPMI. pp. 474–487. Springer (2015)
-  Lorenzi, M., Ayache, N., Pennec, X.: Schild’s ladder for the parallel transport of deformations in time series of images. In: IPMI. pp. 463–474. Springer (2011)
-  Luo, F.: Combinatorial Yamabe flow on surfaces. CCM 6(05), 765–780 (2004)
-  Meyer, M., Desbrun, M., Schröder, P., Barr, A.H.: Discrete differential geometry operators for triangulated 2-manifolds. In: Visualization and mathematics III. Springer (2003)
-  Miller, K.L., Alfaro-Almagro, F., Bangerter, N.K., Thomas, D.L., Yacoub, E., Xu, J., Bartsch, A.J., Jbabdi, S., Sotiropoulos, S.N., et al.: Multimodal population brain imaging in the UK Biobank prospective epidemiological study. Nature Neuroscience 19(11), 1523 (2016)
-  Myronenko, A., Song, X.: Point set registration: Coherent point drift. IEEE Trans. Pattern Anal. Mach. Intell. 32(12), 2262–2275 (2010)
-  Niethammer, M., Reuter, M., Wolter, F.E., Bouix, S., Peinecke, N., Koo, M.S., Shenton, M.E.: Global medical shape analysis using the Laplace–Beltrami spectrum. In: MICCAI. pp. 850–857. Springer (2007)
-  Ovsjanikov, M., Ben-Chen, M., Solomon, J., Butscher, A., Guibas, L.: Functional maps: a flexible representation of maps between shapes. ACM Trans Graph 31(4), 30 (2012)
-  Raviv, D., Bronstein, M.M., Bronstein, A.M., Kimmel, R.: Volumetric heat kernel signatures. In: ACM Workshop on 3D Object Retrieval. pp. 39–44. ACM (2010)
-  Reuter, M., Wolter, F.E., Peinecke, N.: Laplace–Beltrami spectra as ‘Shape-DNA’ of surfaces and solids. Computer-Aided Design 38(4), 342–366 (2006)
-  Vaillant, M., Glaunès, J.: Surface matching via currents. In: Biennial International Conference on Information Processing in Medical Imaging. pp. 381–392. Springer (2005)
-  Wachinger, C., Golland, P., Kremen, W., Fischl, B., Reuter, M., ADNI, et al.: Brainprint: a discriminative characterization of brain morphology. NeuroImage 109, 232–248 (2015)
-  Ye, Z., Diamanti, O., Tang, C., Guibas, L., Hoffmann, T.: A unified discrete framework for intrinsic and extrinsic Dirac operators for geometry processing. In: Computer Graphics Forum. vol. 37, pp. 93–106. Wiley Online Library (2018)
-  Zhang, M., Fletcher, P.T.: Bayesian Principal Geodesic Analysis for estimating intrinsic diffeomorphic image variability. Med Image Anal 25(1), 37–44 (2015)
Appendix 0.A Geometric interpretation of hyperedges
After straightforward manipulations, one obtains a conjugation identity for hyperedges.
Conjugation by the hyperedge sends any vector lying on one of the two adjacent faces to the other, and one face normal to the other. Intuitively speaking, the hyperedge carries information about a connection structure between the affine spaces of adjacent faces.
Moreover, assuming the mesh to be closed, edges sum to zero over any given face, so that the hyperedges of a face sum to its (real-valued) integrated mean curvature.
The definition of hyperedges as per Eq. (3) may seem somewhat arbitrary. In fact, a hyperedge is necessarily of this form, up to a multiplicative constant, under the following mild conditions: (a) its imaginary part is along the edge; (b) it is compatible with the conjugation identity above; and (c) a closedness condition holds iff the face closes. Relating the real part to the bending angle is sufficient to guarantee that spin transformations transform a net into another valid net (i.e. the real part of the transformed edge is provably consistent with the constructive definition above).
Appendix 0.B Edge Integration
Let d₀ denote the discrete gradient, mapping a function over vertices to its differences along directed edges. We are looking for vertex coordinates whose gradient equals the transformed edges. When such an immersion exists, the edge field is said to be exact (as a discrete 1-form, defined over edges). In that case, Eq. (9) follows by taking the divergence on both sides.
Define the discrete curl operator, indexing as in section 3. The curl sends a 1-form (over edges) to a 2-form (over faces), by summing edge values around each face. If the curl vanishes everywhere, the 1-form is said to be closed. It is easy to verify that the curl of a gradient is everywhere zero, so that exactness always implies closedness. For a (discrete) spherical topology, the converse holds: closedness implies exactness. Now let the edge field be generated by a spin transformation acting on the hyperedges of a closed net. Then it is closed iff Eq. (5) is satisfied (immediate from the definition of the intrinsic Dirac operator, cf. section 4).
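The identity "curl of a gradient is zero" (d₁d₀ = 0) can be checked on a small example; connectivity and operator names below are our own illustration:

```python
import numpy as np

# Two triangles (0, 1, 2) and (0, 2, 3) sharing edge (0, 2).
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (0, 3)]
faces = [(0, 1, 2), (0, 2, 3)]
n_v = 4

d0 = np.zeros((len(edges), n_v))          # gradient: vertices -> edges
for k, (i, j) in enumerate(edges):
    d0[k, i], d0[k, j] = -1.0, 1.0

d1 = np.zeros((len(faces), len(edges)))   # curl: edges -> faces
for f, verts in enumerate(faces):
    for a in range(3):                    # walk the oriented face boundary
        i, j = verts[a], verts[(a + 1) % 3]
        if (i, j) in edges:
            d1[f, edges.index((i, j))] = 1.0
        else:                             # edge stored in opposite orientation
            d1[f, edges.index((j, i))] = -1.0

# Exactness implies closedness: the curl annihilates every gradient.
assert np.allclose(d1 @ d0, 0.0)
```

The assertion holds for any connectivity, since summing vertex differences around a closed face boundary telescopes to zero.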
High-level elements of a constructive proof derive from the mesh being simply connected. It is path connected, so we can fix a vertex and reach any other vertex by following a path along edges. Let f be obtained by summing edge values along the path. f is well defined because its value at any vertex is independent of the path. Indeed, consider two paths joining the same pair of vertices: following the first, then the reverse of the second, we run a closed loop. Self-intersections can be removed without loss of generality. One can prove by induction on the loop length that edge values sum to zero over the loop, thus the sums over the two paths are equal. This holds for vertices on a single face by closedness. Closed loops of arbitrary length can always be incrementally shrunk down to this case (by simple connectivity), without changing the sum of edge values (by closedness).
0.B.0.1 Non-simply connected topologies.
Consider a path-connected mesh, but possibly with handles (note that closed loops circling a handle cannot be continuously shrunk down to a trivial loop). A closed 1-form can fail to be exact if it has a non-zero harmonic component, i.e. if it can be written as the sum of an exact part (the gradient of a discrete 0-form) and a non-zero harmonic 1-form (closed and with vanishing divergence). While Eq. (9) still admits a solution, the corresponding edges are then not the prescribed ones. Fortunately, convenient exactness constraints can be derived via the following theorem.
Theorem 0.B.1 (Helmholtz–Hodge decomposition)
The space of (alternating) $1$-forms (edge flows) on the mesh admits an orthogonal decomposition into subspaces of co-exact, harmonic, and exact forms:
$$\mathbb{R}^{E} \;=\; \operatorname{im}(\mathsf{C}^{*}) \,\oplus\, \ker(\mathcal{L}_1) \,\oplus\, \operatorname{im}(\mathsf{G})\,,$$
where $\mathcal{L}_1 = \mathsf{G}\mathsf{G}^{*} + \mathsf{C}^{*}\mathsf{C}$ is the so-called graph Helmholtzian.
This is the Hodge theorem in linear algebra, noting that $\mathsf{C}\mathsf{G} = 0$ and $\ker(\mathcal{L}_1) = \ker(\mathsf{C}) \cap \ker(\mathsf{G}^{*})$. Moreover, by the Hodge isomorphism theorem, the dimension of the harmonic subspace $\ker(\mathcal{L}_1)$ is the $1$st Betti number $b_1$, i.e. twice the number of handles (generally small).
Concretely, let $h_1, \ldots, h_K$ be a set of null eigenvectors for the Helmholtzian $\mathcal{L}_1$. Edges $\tilde{e}$ are integrable iff $\tilde{e}$ is closed (Eq. 5) and orthogonal to the $h_k$ ($1 \leq k \leq K$) w.r.t. the inner product on $1$-forms: $\langle \tilde{e}, h_k \rangle = 0$. In Appendix 0.C, these constraints on transformed edges $\tilde{e}$ are turned into constraints on the spin transformation $\lambda$, or alternatively on the prescribed curvature map $\rho$.
Remark 5 (Cotangent discretization)
Let the mesh be a triangulated net. For $1$-edge flows $\omega$, $\eta$, define the inner product $\langle \omega, \eta \rangle = \sum_{ij} w_{ij}\, \omega_{ij} \eta_{ij}$. Set edge weights to the symmetric expression $w_{ij} = \tfrac{1}{2}(\cot \alpha_{ij} + \cot \beta_{ij})$, where $\alpha_{ij}$ (resp. $\beta_{ij}$) is the angle at the vertex completing the first (resp. second) triangular face adjacent to the edge $ij$. On $0$-forms, define the standard inner product where vertices are weighted by cell areas. Define the divergence operator as the negative adjoint of the gradient $\mathsf{G}$. Then the divergence is exactly the cotangent-weighted divergence (the sum of outbound edge flows at a vertex, weighted by $w_{ij}$), and its composition with the gradient is the cotangent Laplacian. With this, the relevant inner product compatible with the cotangent scheme is now specified.
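The cotangent scheme of the remark can be sketched concretely. The snippet below is an illustrative assembly on a regular tetrahedron (a toy stand-in for the paper's meshes): each face contributes half the cotangent of the angle opposite every edge, and the resulting Laplacian annihilates constants.

```python
import numpy as np

# Regular tetrahedron; cotangent weights w_ij = (cot(alpha_ij) + cot(beta_ij)) / 2
# over the two angles opposite edge ij, and (L x)_i = sum_j w_ij (x_i - x_j).
verts = np.array([[1.0, 1.0, 1.0], [1.0, -1.0, -1.0],
                  [-1.0, 1.0, -1.0], [-1.0, -1.0, 1.0]])
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

n = len(verts)
W = np.zeros((n, n))
for (i, j, k) in faces:
    for (a, b, c) in [(i, j, k), (j, k, i), (k, i, j)]:
        # Angle at vertex c, opposite edge (a, b); accumulate half its cotangent.
        u, v = verts[a] - verts[c], verts[b] - verts[c]
        cot = np.dot(u, v) / np.linalg.norm(np.cross(u, v))
        W[a, b] += 0.5 * cot
        W[b, a] += 0.5 * cot

# Cotangent Laplacian: symmetric, positive semi-definite, zero row sums.
L = np.diag(W.sum(axis=1)) - W
print(np.allclose(L @ np.ones(n), 0.0))  # True: constants span the null space
```

The zero-row-sum property is what makes the divergence the negative adjoint of the gradient under the stated inner products.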
Appendix 0.C Exactness Constraint
This is Eq. (5), already core to the framework. Algorithmically, Eq. (8) only guarantees closedness to hold approximately, but we have observed very good agreement in practice without further constraint. If necessary, closedness can be strictly enforced as a set of $3F$ real-valued constraints ($3$ imaginary dimensions, $F$ faces), e.g. by linearizing around the current solution.
Exactness is guaranteed if $\langle \tilde{e}, h_k \rangle = 0$ for all $k$. $\tilde{e}$ and $h_k$ are alternating (e.g. with the conventions of section 3), so this rewrites as a set of $K$ ($3K$ real-valued) constraints:
that can be linearized around the current solution $\lambda$. Alternatively, let us derive the corresponding constraint on the curvature map $\rho$. Consider a time flow $\lambda(t)$, starting at $\lambda(0) = \lambda$, with time derivative $\dot{\lambda}$ at $t = 0$. Differentiating w.r.t. time, Eq. (5) yields a constraint on $\dot{\lambda}$, and Eq. (12) rewrites as:
where we use the alternating property to collapse the two sums and define . In other words, for spanning , . Let be the unique solution to and note that . Finally, since is self-adjoint, we obtain:
where $h_k^x, h_k^y, h_k^z$ are the three imaginary components of $h_k$. The constraint can be enforced by projecting the update onto the orthogonal subspace of the $h_k^x, h_k^y, h_k^z$.
0.c.0.3 To summarize:
Compute the null eigenspace of the Helmholtzian $\mathcal{L}_1$
Compute and s.t.
Project the update onto the orthogonal subspace of the imaginary components of the null eigenvectors
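The projection step above can be sketched as follows. This is a minimal illustrative routine, with random stand-ins for the constraint vectors (the actual vectors would be the imaginary components of the harmonic eigenvectors): it orthonormalizes the constraint rows and removes their span from the update.

```python
import numpy as np

def project_out(update, constraints):
    """Project `update` onto the orthogonal complement of span(constraints).

    constraints: array of shape (k, n), rows spanning the subspace to remove.
    """
    # Orthonormalize the constraint rows (robust to redundant rows).
    q, _ = np.linalg.qr(constraints.T)
    return update - q @ (q.T @ update)

rng = np.random.default_rng(0)
H = rng.standard_normal((3, 10))   # stand-in constraint vectors (hypothetical)
u = rng.standard_normal(10)
u_perp = project_out(u, H)
print(np.allclose(H @ u_perp, 0.0))  # True: the constraints hold exactly
```

Applying this after each solve keeps the iterates in the feasible (exact) subspace at negligible extra cost, since the null eigenspace is computed once.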
Appendix 0.D Area distortion
We wish to penalize local scale changes (i.e. moving away from the initial area distribution), relative to the global rescaling. Therefore the local scale change (with global rescaling factored out) is written as
with the average face area. We implement a soft constraint bounding this quantity by a tolerance for area distortion, which can be set by the user or jointly adjusted over the course of the iterations. After introducing Lagrange multipliers, we are looking at quadratic penalties, which we approximate by linearizing w.r.t. a variation of the spin transformation. We have found this mechanism to hold up over large integration steps in practice. This can be better intuited by looking at the nature of the approximations made during the linearization (see below). The approximate quadratic energy is the sum of a sparse block-diagonal matrix and a low-rank (dense) term. Woodbury matrix identities allow quadratic systems involving this energy to be solved without directly storing or manipulating the dense term.
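The sparse-plus-low-rank solve mentioned above can be sketched as follows. This is an illustrative example under stand-in data (a diagonal sparse term and a random thin factor, in place of the actual block-diagonal energy and penalty factors): the Woodbury identity reduces the dense part to a small $k \times k$ system.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def woodbury_solve(A, U, b):
    """Solve (A + U U^T) x = b without forming the dense matrix U U^T.

    A: sparse (e.g. block-diagonal) matrix; U: thin (n, k) low-rank factor.
    """
    solve_A = spla.factorized(A.tocsc())          # sparse factorization of A
    AinvU = np.column_stack([solve_A(u) for u in U.T])
    Ainvb = solve_A(b)
    # Woodbury: (A + U U^T)^-1 b = A^-1 b - A^-1 U (I + U^T A^-1 U)^-1 U^T A^-1 b
    small = np.eye(U.shape[1]) + U.T @ AinvU      # only a k x k dense system
    return Ainvb - AinvU @ np.linalg.solve(small, U.T @ Ainvb)

rng = np.random.default_rng(1)
n, k = 200, 3
A = sp.diags(rng.uniform(1.0, 2.0, n))           # stand-in sparse SPD term
U = rng.standard_normal((n, k))
b = rng.standard_normal(n)

x = woodbury_solve(A, U, b)
dense = A.toarray() + U @ U.T
print(np.allclose(dense @ x, b))  # True
```

The cost is one sparse factorization plus $k{+}1$ back-substitutions, which stays linear in the number of faces when $k$ is small.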
0.d.0.2 Linearization of .
We look for a linearized approximation of for a change around the spin transformation . We start by linearizing Eq. (4). Noting that , we get:
Recalling from Appendix 0.A that and taking the norm on both sides, we get:
We can ignore the change in the cosine of the dihedral angle (to first order) for simplicity. Secondly, we assume the transformation to be close to conformal. This assumption is coherent with the spirit of the framework, and should hold in any case if mesh quality is to be preserved locally in time. The overall transformation is still expected to progressively drift away from quasi-conformality to accommodate area preservation. With this, we can approximate the change in area for a small variation from the change in edge length, and we get:
where is the area when applying to the initial face-edge constraint net, resp. when applying . This yields the following expression for :
where denotes the spatial average weighted by the face areas, whereas is the inner product on quaternions. In the last expression, we made use of , keeping only first-order terms.
0.d.0.3 Penalty matrix assembly.
Writing the penalty as , the penalty matrix is the sum of a sparse block-diagonal term and rank-$1$ terms, . The derivations are tedious but straightforward, yielding:
with the use of bra-ket notation, and where stands for a normalised area. The sum of rank-$1$ updates might be degenerate (for instance when all multipliers are equal). An SVD decomposition can be used to derive an equivalent, non-degenerate low-rank basis of vectors.
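The SVD-based compression of the rank-$1$ terms can be sketched as follows. This is an illustrative example with random stand-in factors confined to a low-dimensional subspace (mimicking a degenerate sum): stacking the weighted factors and truncating the SVD yields an equivalent, non-degenerate basis.

```python
import numpy as np

# Sum of weighted rank-1 terms  M = sum_f mu_f v_f v_f^T  can be degenerate
# when the vectors v_f span a low-dimensional subspace. An SVD of the stacked,
# weighted factors yields an equivalent compact basis.
rng = np.random.default_rng(2)
n, F, r = 8, 50, 2
basis = rng.standard_normal((n, r))
vs = basis @ rng.standard_normal((r, F))       # F vectors in an r-dim subspace
mus = rng.uniform(0.5, 1.5, F)                 # stand-in positive multipliers

# Stack sqrt(mu_f) v_f as columns, so that M = C C^T.
C = vs * np.sqrt(mus)
u, s, _ = np.linalg.svd(C, full_matrices=False)
keep = s > 1e-10 * s[0]
B = u[:, keep] * s[keep]                       # compact basis: M = B B^T

M = sum(mu * np.outer(v, v) for mu, v in zip(mus, vs.T))
print(B.shape[1], np.allclose(M, B @ B.T))  # 2 True
```

Replacing the $F$ rank-$1$ updates with the few columns of the compact basis keeps the Woodbury solve well conditioned and cheap.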