A Survey of Algorithms for Geodesic Paths and Distances

by Keenan Crane et al.
Carnegie Mellon University

Numerical computation of shortest paths or geodesics on curved domains, as well as the associated geodesic distance, arises in a broad range of applications across digital geometry processing, scientific computing, computer graphics, and computer vision. Relative to Euclidean distance computation, these tasks are complicated by the influence of curvature on the behavior of shortest paths, as well as the fact that the representation of the domain may itself be approximate. In spite of the difficulty of this problem, recent literature has developed a wide variety of sophisticated methods that enable rapid queries of geodesic information, even on relatively large models. This survey reviews the major categories of approaches to the computation of geodesic paths and distances, highlighting common themes and opportunities for future improvement.




1. Introduction

The ability to rapidly compute shortest paths and/or distances between pairs or sets of points is a fundamental operation in computational science. This survey considers algorithms for computing geodesic paths and distances, i.e., paths through curved domains that arise in a broad range of tasks across digital geometry processing, scientific computing, computer graphics, and computer vision. Relative to computing distance on Euclidean domains, this problem is complicated by the influence of curvature, as well as the fact that the domain itself may not be known exactly.

Several problems arise in this context, which have different applications and are best tackled with different techniques: from tracing a geodesic line given a starting point and a direction; to computing the shortest path between a pair of points; to computing the distance function over a whole surface, with respect to a source point or set; to providing a framework for rapidly extracting the shortest path between any pair of points. Broadly speaking, there are two major classes of methods: those rooted in computational geometry, which view a polyhedral surface as an exact description of the geometry, and those rooted in scientific computing, which view a polyhedron as an approximation of a smooth surface. Neither class of approaches is universally “better”: each may be best suited to a particular task (e.g., finding geodesics on a CAD model, versus approximating geodesic distance on a scanned surface) and a particular setting, according to a wide variety of trade-offs in accuracy, storage cost, run-time performance, and scalability.

In general one might wish to compute geodesics and geodesic distances on many different data structures (point clouds, voxelizations, etc.), though in this survey we focus primarily on polyhedral surfaces, represented as triangle meshes. The survey is organized according to the different geodesic distance problems and the attendant classes of approaches to their solution (see Table 3 for a summary). After introducing the basic problems in Sec. 1.1 and providing the necessary background in Sec. 2, we review major classes of algorithms in Secs. 3, 4 and 5. In Sec. 6, we examine the relationship between mesh quality and geodesic computation. In Sec. 7, we provide a partial evaluation of the reviewed methods, on the basis of available implementations. Finally, in Sec. 8, we draw some conclusions and we highlight common themes and opportunities for future improvement in geodesic algorithms.

Previous surveys. Our focus in this survey is on practical algorithms and their behavior on real-world datasets. A relatively recent survey by Bose et al. (Bose et al., 2011) deals primarily with theoretical aspects of one particular problem (the polyhedral shortest path problem), with a focus on asymptotic time complexity and approximation bounds, as well as special cases such as convex polyhedra. Less attention is devoted to broader problems (such as geodesic distance transforms) or issues such as real-world performance, mesh quality, etc.; in this respect, such a survey is complementary to ours. Patané (Patané, 2016) provides a detailed survey on the specific topic of Laplacian spectral kernels and distances, which is likewise complementary to our account of PDE-based methods. Peyré and colleagues (Peyré and Cohen, 2009; Peyré et al., 2010) provide a nice overview of several aspects of PDE-based methods, though the last few years have seen major advancements in PDE-based methods, which we discuss in Sec. 3.

1.1. Geodesic Queries

Tasks in geometry processing may require a variety of different queries about geodesics and geodesic distances; though seemingly similar, these queries must be answered via very different algorithms. We review several important cases here.

Initial Value Problems. Perhaps the simplest type of problem is the initial value problem, which traces out a geodesic starting at a given point in a given direction. In the Euclidean case, the solution is simply a straight ray through space, though in general one must of course account for the influence of curvature; in the discrete setting one also encounters some difficulty in defining which ray is most “straight” (see Section 2.2). Computationally this problem, which we will refer to as Geodesic Tracing (GT), tends to be among the easiest of geodesic queries; we examine it in detail in Sec. 5.

Boundary Value Problems. In boundary value problems, a set of points on the domain is given as input, and one seeks (for instance) geodesic paths between these points, or geodesic distances to all these points. This type of problem is generally not as easy as an initial value problem since, for instance, one cannot simply shoot a geodesic ray from one point and hope to hit a particular target point. We consider in particular the following problems:

  • Point-to-Point Geodesic Path (PPGP): given two distinct points p and q, find any shortest geodesic between them. (Note that in general the solution may not be unique—see Sec. 2.)

  • Single Source Geodesic Distance / Shortest Path (SSGD/SSSP): given a source point s, compute a scalar function that provides the length of the shortest path from s to each point of the domain. In SSGD, only the distance function is provided; in SSSP, the shortest paths are also explicitly computed.

  • Multiple Source Geodesic Distance / Shortest Path (MSGD/MSSP): in this case the source S is no longer just a single point, but rather a collection of points, curves, etc. The result is a function which for each point gives the distance to the closest point of S; this query is also sometimes called a distance transform. In MSSP, paths are also provided.

  • All-Pairs Geodesic Distances / Shortest Paths (APGD/APSP): given a source set S, compute the shortest distances/paths between all pairs of points in S.
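As a crude baseline for the SSGD problem, one can run Dijkstra’s algorithm on the edge graph of the mesh, restricting paths to mesh edges; the resulting graph distance is only an upper bound on the true polyhedral geodesic distance, but it illustrates the shape of a single-source query. The sketch below is our own illustration (the helper name `dijkstra_ssgd` and the toy two-triangle mesh are hypothetical, not from the survey):

```python
import heapq

def dijkstra_ssgd(n_vertices, edges, source):
    """Single-source distances along the edge graph of a mesh.

    edges: list of (i, j, length). Paths are restricted to mesh edges,
    so this is only an upper bound on the polyhedral geodesic distance."""
    adj = [[] for _ in range(n_vertices)]
    for i, j, w in edges:
        adj[i].append((j, w))
        adj[j].append((i, w))
    dist = [float('inf')] * n_vertices
    dist[source] = 0.0
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# Unit square split into two triangles: vertices 0..3, diagonal edge 0-2.
r2 = 2 ** 0.5
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0), (0, 2, r2)]
d = dijkstra_ssgd(4, edges, 0)
# Vertex 2 is reached at distance sqrt(2) via the diagonal, which here
# happens to coincide with the true geodesic.
```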

In Sections 3 and 4, we review the methods for solving these problems, categorized according to the basic algorithmic approach they take. Most methods that solve the SSGD/SSSP problems also provide a solution to PPGP as a byproduct; several easily generalize to MSGD/MSSP; only a few methods address the APGD/APSP problems. In Section 4.2, we review some algorithms that are specifically developed to solve PPGP.

Note that most methods in the literature assume that initial points or boundary conditions are specified at vertices of an input triangulation, though some methods allow direct evaluation of geodesic queries at arbitrary points of the domain. For algorithms that only support queries at vertices, a pragmatic approach is to locally refine the mesh by splitting faces or edges at the query point.

1.2. Polyhedral vs. Smooth Geodesic Problem

When thinking about algorithms for computing geodesics, it is important to consider what our domain represents: does it exactly represent the geometry of interest, or is it merely an approximation of the true domain? The answer to this question depends on context. For instance, in problems like animation or CAD/CAM, where surfaces are designed by artists or engineers, we may have an exact boundary representation and want to compute exact paths along this surface. On the other hand, when a mesh is obtained via, say, 3D scanning or numerical simulation, it can only provide an approximation of the true domain, i.e., the real physical surface of interest. In this case, the accuracy of geodesic computation is fundamentally limited by the accuracy of the domain representation itself. The choice of application therefore influences how we talk about error, and also which algorithms are best suited to a given task.

In the context of polyhedral surfaces, a common misconception is that the “correct” answer is obtained by considering piecewise linear paths through the domain – though of course these paths may only be approximations of true (smooth) geodesic curves. A simple 1D example is illustrated in the inset. What is the “correct” distance between the point a and the point b? If we view the polygon as an exact representation of the geometry (i.e., if we wish to compute distance along the square itself), then the geodesic distance is obtained by summing the two edge lengths between a and b. On the other hand, if we view the polygon as an approximation of the smooth circle, the distance should just be the arc length between a and b along the circle. Not surprisingly, the polygonal geodesic distance is an underestimate of the smooth geodesic distance, since each straight segment takes a “short cut” from point to point. This same phenomenon occurs on triangulated surfaces: the polyhedral geodesic distance generally underestimates the smooth geodesic distance (for instance, the distance along a cube provides a poor approximation of the distance along the circumscribed sphere). To make the distinction clear in this survey, we therefore distinguish between two versions of the geodesic problem:
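The underestimate is easy to check numerically. As a minimal sketch (our own, with the square taken to be inscribed in the unit circle), compare the straight edge between two adjacent square vertices with the corresponding circular arc:

```python
import math

# Unit circle approximated by an inscribed square with vertices at
# angles 0, 90, 180, 270 degrees. Distance between two adjacent
# vertices of the square:
polygonal = math.dist((1.0, 0.0), (0.0, 1.0))  # straight edge: sqrt(2) ~ 1.414
smooth = math.pi / 2                           # arc length:    pi/2    ~ 1.571

# The polygonal distance "short cuts" the arc, so it underestimates.
assert polygonal < smooth
```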

  • Polyhedral Geodesic Problem — view a discrete surface as an exact description of the geometry, and aim to compute exact geodesics or exact distances along this polyhedral surface.

  • Smooth Geodesic Problem — view a discrete surface as an approximation of the true geometry, and aim to compute the most accurate possible approximation of geodesics or geodesic distances.

The polyhedral problem captures the standard viewpoint of computational geometry; the smooth problem encapsulates the standard viewpoint of scientific computing. These two problems of course interact: for instance, one can use the exact polyhedral distance to approximate the true smooth distance. Surprisingly enough, however, the best strategy for approximating the smooth geodesic distance may not be to simply compute the exact polyhedral distance. For instance, even on a nicely triangulated sphere, the polyhedral distance gives only an approximation of the smooth distance (Fig. 1); greater accuracy can be achieved by considering the distance along a spline or subdivision surface that approximates the same sampling of the domain (De Goes et al., 2016; Nguyen et al., 2016). The fact that one can do better than the exact polyhedral distance should not come as a surprise: higher-order geometric elements (e.g., spline or subdivision patches) use a much larger stencil of information than individual triangles, which correspond to linear elements (consider, for instance, approximating the circle by four Bézier curves rather than four straight segments). On the other hand, higher-order interpolation provides no benefit for the polyhedral geodesic problem, where one is interested in the exact distance along a particular piecewise linear surface.
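The Bézier-versus-segments comparison can be made quantitative. The sketch below (our own illustration, using the standard “magic constant” for approximating a quarter circle by a cubic Bézier curve) measures perimeter error for both approximations of the unit circle:

```python
import math

def bezier_point(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t."""
    s = 1.0 - t
    return tuple(s**3 * a + 3 * s * s * t * b + 3 * s * t * t * c + t**3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

# Quarter of the unit circle as a cubic Bezier (standard handle length).
k = 4.0 / 3.0 * math.tan(math.pi / 8.0)
ctrl = ((1, 0), (1, k), (k, 1), (0, 1))

# Approximate the quarter-arc length by dense polyline sampling.
n = 10000
pts = [bezier_point(*ctrl, i / n) for i in range(n + 1)]
quarter_len = sum(math.dist(pts[i], pts[i + 1]) for i in range(n))

true_perimeter = 2 * math.pi          # smooth circle
bezier_perimeter = 4 * quarter_len    # four cubic patches
square_perimeter = 4 * math.sqrt(2)   # four straight chords (inscribed square)

# Four cubics reproduce the perimeter to well under 1%,
# while four straight segments err by roughly 10%.
```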

Figure 1. When a polyhedral surface is an approximation of a smooth surface, the “exact” polyhedral distance still does not recover the true distance of the smooth surface. For instance, even for very nice triangulations of the smooth sphere, algorithms from computational geometry improve accuracy by only one order relative to PDE-based methods (from linear to quadratic).

2. Background

In this section we provide some basic concepts and definitions that will facilitate our discussion of algorithms—for a more thorough introduction, see (do Carmo, 1976) in the smooth setting (especially Chapter 4, Sections 4–4, 4–6, and 4–7), and (Crane et al., 2013a) for the discrete setting. At a high level, geodesics can be characterized as curves that are in some sense straightest and (locally) shortest, though one must be careful about the relationship between these two characterizations (Sec. 4.1), especially on polyhedra where they may lead to different discrete definitions (Polthier and Schmies, 1998). We first give some standard background in the smooth setting (Sec. 2.1), followed by a discussion of geodesics on polyhedral surfaces (Sec. 2.2).

2.1. Smooth Setting

Figure 2. A geodesic is any “straight” curve between two points, and geodesics are a feature of the intrinsic geometry of the surface: they do not change if we bend the surface, so long as there is no stretching, shearing, or ripping. Though the shortest path between two points will always be a geodesic, a geodesic is not necessarily the shortest path.

Intrinsic vs. Extrinsic Geometry. When studying geodesics, how should we describe the shape of the domain? An important feature of geodesics is that they depend only on distances along the surface, and not at all on how the surface sits in space. As a concrete example, consider a piece of paper rolled up into a tube (Fig. 2): a straight line drawn between two nearby points p and q on the flat piece of paper is still a shortest curve on the tube. One can find an even shorter path by drawing a straight line through space, but this line is no longer a path along the surface. Likewise, there are many other ways we could bend the tube without changing the shape or length of shortest paths—for this reason, when studying geodesics we really want to ignore the extrinsic geometry of the surface (how it sits in space), and focus purely on its intrinsic geometry (only those things that can be measured by walking along the surface). The desire to compute shortest intrinsic (rather than extrinsic) paths is of course quite natural: for instance, the “shortest” route between two cities is the one that best avoids mountains and valleys—not the one that tunnels straight through the Earth!

Shortest vs. Straightest Curves. The tube example also helps to illustrate another feature of geodesics: shortest implies straight, but straight does not necessarily imply shortest. Consider, for instance, a long straight line from p that “goes off the edge of the map” before returning to q. This curve is still a geodesic, but it is not a shortest geodesic—in particular, it is not as short as the direct segment from p to q. Finally, the word “shortest” does not imply that there is a unique geodesic of minimal length. For instance, there may be two equally short ways to go around a hill; in fact, there may be infinitely many shortest paths between two given points, such as the north and south poles of a sphere.

Unfortunately, finding geodesics is not as simple as just “unrolling” a smooth surface into the plane and finding straight lines (as we did with the tube), since most surfaces cannot be flattened without distorting distances. Consider for instance the many different map projections used by cartographers (Mercator, Robinson, etc.), none of which preserve geodesic distances. However, we can nonetheless reason about shortness and straightness of curves: for instance, a curve on an embedded surface is “straight” if there is no acceleration as we travel along the curve, except in the normal direction—in other words, if we only turn by the bare minimum amount needed to remain on the surface. Alternatively, we can adopt the perspective of Riemannian geometry, which allows one to reason about intrinsic geometry without thinking about how it is embedded in space.

2.1.1. Smooth Surfaces

In the intrinsic picture, our main object of study is a smooth surface M with Riemannian metric g. The fact that M is smooth implies that at each point p we have a tangent space T_pM, where each tangent vector X ∈ T_pM specifies a vector pointing “along” the surface, i.e., the tangent space describes all possible directions we can travel away from p. Since we have no information about how the surface sits in space, the one and only way to measure lengths and angles of tangent vectors is via the Riemannian metric, which for each tangent space T_pM provides a positive-definite inner product g_p(X, Y). For instance, the length of any tangent vector X is given by |X| := g_p(X, X)^{1/2}; the angle between any two tangent vectors X, Y is given by θ := arccos(g_p(X, Y)/(|X| |Y|)), just as in ordinary Euclidean space. In fact, whenever the surface is sitting in space, the Riemannian metric and Euclidean inner product coincide on tangent vectors. When the meaning is understood from context, or when working with a vector field (i.e., a choice of vector in each tangent space), we can drop the subscript p.

2.1.2. Shortest Curves

The Riemannian metric enables us to easily define geodesic curves in terms of conditions on length. In particular, consider any differentiable map γ from an interval I = [a, b] of the real line into the surface M, and let γ' := dγ/dt; we say that γ is arc-length parameterized if |γ'(t)| = 1 for all t ∈ I. We can express the length of any curve γ as

    length(γ) := ∫_I |γ'(t)| dt.

The geodesic distance between any two points p, q ∈ M is the infimum of length over all curves γ such that γ(a) = p and γ(b) = q. An arc-length parameterized curve γ is a shortest geodesic if it achieves this infimal length. An arc-length parameterized curve is a geodesic if it is “locally shortest,” i.e., if there is some ε > 0 such that γ is a shortest geodesic when restricted to any interval of length at most ε. Intuitively, a shortest geodesic is a curve that cannot be made any shorter (without moving its endpoints); a geodesic is a curve that cannot be made shorter by adjusting any small piece of it.
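A minimal numerical illustration of the “infimum of length” definition, on the unit sphere (our own sketch; the helper name is hypothetical). The direct great-circle arc between two points achieves the geodesic distance, while any detour is strictly longer:

```python
import math

def sphere_path_length(path):
    """Length of a piecewise-geodesic path through unit-sphere points,
    summing the great-circle arcs between consecutive samples."""
    def arc(u, v):
        dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
        return math.acos(dot)
    return sum(arc(path[i], path[i + 1]) for i in range(len(path) - 1))

p = (1.0, 0.0, 0.0)
q = (0.0, 1.0, 0.0)
direct = sphere_path_length([p, q])              # pi/2: shortest geodesic
detour = sphere_path_length([p, (0.0, 0.0, 1.0), q])  # via the pole: pi
# direct < detour, consistent with the infimum characterization.
```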

2.1.3. Straightest Curves

We can also characterize geodesics in terms of straightness rather than shortness. This perspective is best understood from a dynamic point of view: imagine we are driving along a road at constant speed, and never turn our wheel left or right. Although we may encounter hills and valleys along the way, our trajectory is in some sense as straight as it possibly could be. In the extrinsic setting the only acceleration we experience is in the direction normal to the surface, i.e., the minimal possible acceleration needed to keep our vehicle on the ground. In the intrinsic setting, we do not need to worry about this normal acceleration since our surface is not sitting in space—instead, we just say that a geodesic is a curve of zero acceleration.

How can we measure the acceleration of a curve γ in a surface M? In the Euclidean plane, we can just take the second derivative of γ with respect to t. In the Riemannian setting, however, the velocity vectors γ'(t₀) and γ'(t₁) at two different times will in general belong to two different tangent spaces, and hence cannot be compared directly. Instead, we can use the Levi-Civita connection or covariant derivative ∇_X Y, which (intuitively) uses the metric to measure how much the vector field Y is turning as we move along the direction X. In particular, a curve γ is a geodesic if its tangent exhibits zero turning as we move along the tangent direction, i.e., if

    ∇_{γ'} γ' = 0

at all times t. (Since the covariant derivative is local, it does not help us define shortest geodesics.) More generally, for any arc-length parameterized curve γ we have

    ∇_{γ'} γ' = κ_g n,

where for each time t, the vector n is the unit normal to the curve (i.e., a unit vector orthogonal to the tangent), and the scalar κ_g is the geodesic curvature, which measures the rate at which the curve is bending. A geodesic is then a curve with zero geodesic curvature.

We can also understand geodesic curvature from an extrinsic point of view. Suppose we have a map f assigning each point of the surface a location in space, and let N be the corresponding Gauss map giving the unit normal at each point. A curve γ on the surface can now be realized as a curve f ∘ γ in Euclidean space. Assuming γ is parameterized by its arc-length s, its unit tangent is given by T := d(f ∘ γ)/ds; we will use n to denote the normal to the curve (as opposed to the normal N of the surface). The derivative of T in turn describes the curvature of the curve, which can be decomposed into two scalar components:

    dT/ds = κ_n N + κ_g n.

Note that there is no derivative in the T direction, since γ is arc-length parameterized and hence its tangent doesn’t change length. In other words, the geodesic curvature κ_g describes how much the curve is bending “in plane”; the normal curvature κ_n describes how much it is bending “out of plane.” Geodesics are again curves of zero geodesic curvature, i.e., those that go straight along the surface (and bend purely to remain on the surface). An illustration is provided in Fig. 3.

Figure 3. Extrinsically, the curvature of a curve can be decomposed into the normal and geodesic curvatures κ_n and κ_g, which measure the change in the tangent T in the direction of the surface normal N and the curve normal n, respectively (left). A “wiggly” but planar curve will have zero normal curvature and nonzero geodesic curvature (center), whereas a great arc on a sphere has zero geodesic curvature but nonzero normal curvature (right).

2.1.4. Exponential Map

At any point p ∈ M and for any unit vector X ∈ T_pM, there is a unique geodesic γ traveling away from p in the direction X, i.e., with γ(0) = p and γ'(0) = X (this perspective leads to the tracing algorithms discussed in Sec. 5). More generally, the exponential map exp_p takes any tangent vector X at p to the point of M reached by walking along a geodesic in the direction X/|X| for a distance |X|. In general, however, this map is not invertible: the same point may be reached by starting at p and walking along geodesics in different directions (or for different distances). Any neighborhood of p over which exp_p is invertible is called a normal neighborhood of p; on any such neighborhood, the inverse of the exponential map defines the logarithmic map log_p. The logarithmic map provides local coordinates around p called normal coordinates, by effectively “flattening out” a small patch of the surface onto its tangent space. Polar coordinates on the tangent space are sometimes called geodesic polar coordinates; here, geodesics can be characterized (locally) as lines of constant angle. Circles in this tangent plane are mapped by the exponential map to geodesic circles, which orthogonally intersect geodesic rays from p (Gauss’ lemma).

Away from a normal neighborhood, geodesics may start behaving in less intuitive and undesirable ways. We review some global aspects of geodesics in Sec. 4.1.
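On the unit sphere, the exponential and logarithmic maps have simple closed forms; the sketch below (our own illustration, not an algorithm from the survey) checks that log_p inverts exp_p within a normal neighborhood:

```python
import math

def sphere_exp(p, v):
    """Exponential map at p on the unit sphere: walk along the great
    circle from p in the tangent direction v, for distance |v|."""
    r = math.sqrt(sum(x * x for x in v))
    if r < 1e-15:
        return p
    return tuple(math.cos(r) * a + math.sin(r) * b / r for a, b in zip(p, v))

def sphere_log(p, q):
    """Logarithmic map: tangent vector at p pointing toward q, whose
    length equals the geodesic distance (valid away from q = -p)."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(p, q))))
    theta = math.acos(dot)          # geodesic distance on the unit sphere
    u = tuple(b - dot * a for a, b in zip(p, q))  # project q onto T_p
    n = math.sqrt(sum(x * x for x in u))
    if n < 1e-15:
        return (0.0, 0.0, 0.0)
    return tuple(theta * x / n for x in u)

p = (1.0, 0.0, 0.0)
q = (0.0, 0.0, 1.0)
v = sphere_log(p, q)
# exp_p(log_p(q)) recovers q.
assert all(abs(a - b) < 1e-12 for a, b in zip(sphere_exp(p, v), q))
```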

2.1.5. Geodesic Distance

Many of the algorithms discussed in this survey aim not to compute individual geodesic curves, but rather the geodesic distance function d over the entire surface. In particular, for any two points p, q ∈ M, d(p, q) is the smallest length of any geodesic between p and q. This function satisfies all the properties of a distance metric, i.e., nonnegativity (d(p, q) ≥ 0), nondegeneracy (d(p, q) = 0 if and only if p = q), symmetry (d(p, q) = d(q, p)), and the triangle inequality (d(p, q) ≤ d(p, r) + d(r, q)); these properties will be important to keep in mind when studying numerical approximations of geodesic distance. A geodesic ball of radius r around p is the set of all points q such that d(p, q) < r.

Cut locus.

For a given point p, the injectivity radius is the radius of the largest geodesic ball around p such that there is a unique shortest path from any point of this ball back to p. Outside this radius, there are points with two or more shortest paths to p. The collection of all such points is called the cut locus. More generally, for any source set S (e.g., a collection of isolated points, or a network of curves, surfaces, etc.) the cut locus is the set of points for which there is not a unique shortest path to the set S. The geodesic distance function is smooth away from the cut locus. In most situations, it is not particularly important that geodesic distance algorithms accurately reconstruct the cut locus: only that the distance values or paths computed at each point are accurate. However, some applications (e.g., computing the medial axis of a shape) may require an accurate cut locus.

2.2. Polyhedral Setting

Geodesics on polyhedral surfaces behave somewhat differently from geodesics on smooth surfaces—especially in the vicinity of vertices, where the surface fails to be smooth. Methods coming from computational geometry (Sec. 4) must therefore carefully consider the specific behavior of polyhedral geodesic paths; methods based on the finite element viewpoint (Sec. 3) largely side-step these questions by approaching geodesic computation from the perspective of function approximation rather than explicit path tracing.

A polyhedral surface is, roughly speaking, a collection of Euclidean (planar) polygons glued together to form a surface. Since any planar polygon can be triangulated (and the choice of triangulation has no effect on geodesics), we will assume that the combinatorics of any polyhedron are encoded by a simplicial complex with vertices V, edges E, and faces F. For simplicity we also assume that this complex is manifold, i.e., the link of every interior vertex is a single closed loop, and the link of every boundary vertex is a single path (see inset figure).

Figure 4. Intrinsically, a vertex of a polyhedral surface looks like either a circular cone, a flat disk, or a saddle, depending on the sign of the angle defect Θ(v).

The geometry of a polyhedral surface can be specified in one of two ways: either extrinsically, using vertex coordinates which are linearly interpolated over each triangle, or intrinsically, by specifying positive edge lengths ℓ that satisfy the triangle inequality in each face: ℓ_ij ≤ ℓ_jk + ℓ_ki. Of course, one can always obtain edge lengths from vertex coordinates (as the Euclidean distance between endpoints), though the vast majority of geodesic algorithms can in principle be implemented without reference to vertex coordinates—reflecting the fact that geodesics are a purely intrinsic object. Instead, quantities like triangle areas and interior angles at triangle corners can be computed directly from edge lengths (using expressions like Heron’s formula and the law of cosines). This fact is especially useful given that in some geometric problems (e.g., manifold learning) one may have distances between points, but not an explicit embedding.

For each vertex v, the angle defect is the quantity

    Θ(v) := 2π − Σ_f θ_f(v),

where θ_f(v) denotes the interior angle at vertex v of triangle f, and the sum is taken over all triangles containing vertex v. Intuitively, this quantity captures the “flatness” of the vertex, and is often viewed as a discrete analogue of the Gaussian curvature. An important mental image is provided in Fig. 4: imagine that the triangles around a vertex are flat pieces of paper glued together along edges. Depending on the value of Θ(v), these triangles can be bent into a smooth circular cone, a flat circular disk, or a saddle-like figure, all without changing geodesic distance. We will therefore refer to vertices with positive, zero, and negative angle defect as spherical or cone-like, Euclidean, and hyperbolic or saddle-like, respectively. This local picture nicely encapsulates the intrinsic geometry of any polyhedron: it is smooth and intrinsically flat away from the vertices—even the location of edges is irrelevant when it comes to thinking about geodesics and geodesic distance, since the edges are effectively invisible to an intrinsic observer. The sign of Θ(v) plays an especially important role in the context of polyhedral geodesics, since shortest geodesics will often pass through vertices where Θ(v) < 0, but can never pass through vertices where Θ(v) > 0 (as will be discussed in Sec. 2.2.1).
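The angle defect is straightforward to compute from edge lengths alone, via the law of cosines; the following sketch (our own, with hypothetical helper names) illustrates this for a vertex surrounded by equilateral triangles:

```python
import math

def corner_angle(a, b, c):
    """Interior angle opposite edge length a, via the law of cosines."""
    return math.acos((b * b + c * c - a * a) / (2.0 * b * c))

def angle_defect(incident_corner_lengths):
    """Angle defect 2*pi minus the sum of interior angles at an
    interior vertex.

    incident_corner_lengths: per incident triangle, a triple
    (opposite, left, right) of edge lengths, where 'left' and 'right'
    are the two edges meeting at the vertex."""
    total = sum(corner_angle(a, b, c) for a, b, c in incident_corner_lengths)
    return 2.0 * math.pi - total

# Six equilateral triangles around a vertex: intrinsically flat.
flat = angle_defect([(1.0, 1.0, 1.0)] * 6)  # 2*pi - 6*(pi/3) = 0
# Five equilateral triangles: a cone-like (spherical) vertex.
cone = angle_defect([(1.0, 1.0, 1.0)] * 5)  # 2*pi - 5*(pi/3) = pi/3 > 0
```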

Working with polyhedral rather than smooth surfaces has some interesting geometric consequences. On the one hand, since each individual triangle is flat, we can study geodesics by “unfolding” local neighborhoods into the plane, i.e., by finding an arrangement of vertices in the plane that agrees with the intrinsic edge lengths—several examples are shown in Fig. 4. This picture makes it clear that, locally, geodesics on polyhedral surfaces can be constructed by simply drawing line segments in the Euclidean plane. The main computational challenge, therefore, is answering more global questions: for example, which sequence of triangles must we unfold to find the shortest such line? An analogous perspective is generally not available for smooth surfaces, since any local flattening will invariably distort lengths (i.e., geodesics are rarely straight lines in local coordinates). On the other hand, the fact that our surface is no longer smooth makes the definition of polyhedral geodesics somewhat more nuanced—especially in the vicinity of vertices.

2.2.1. Shortest vs. Straightest

In the smooth setting we had two equivalent characterizations of geodesic curves: they are both straightest (Sec. 2.1.3) and locally shortest (Sec. 2.1.2). As studied by Polthier & Schmies (Polthier and Schmies, 1998), these two characterizations are no longer equivalent in the polyhedral setting. This situation reflects a common “no free lunch” scenario in the discretization of objects from differential geometry (Crane and Wardetzky, 2017), namely that one typically cannot find a single definition (in this case, for discrete geodesics) that exactly captures all the key properties of the original smooth object (in this case, both straightest and locally shortest).

Locally, polyhedral geodesics essentially behave the same as in the smooth setting. Consider for instance a pair of points contained in the same triangle—here geodesics are just ordinary line segments, which are both shortest and straightest. Likewise, for two points close to a common edge (and away from any vertex) we can simply unfold the two incident triangles into the plane and connect them by the unique shortest, straightest segment (see Fig. 5, left). Globally, however, the situation is more complicated due to non-smooth points at vertices.

Straightest. To find the straightest path leaving a point p in a direction X, we can simply apply the local observations made above: inside a given triangle the path is found by extending a straight ray along X; to continue this path into the next triangle we can imagine unfolding neighboring triangles into the plane and extending this ray further. The resulting path corresponds to a straight line along a strip of planar triangles, as depicted in Fig. 5, right. (Note that for very long paths we may encounter the same triangle more than once, in which case we would have multiple copies of this triangle in the unfolding.) This tracing operation effectively defines a discrete version of the exponential map discussed in Sec. 2.1.4.

What should we do if our path enters a vertex v? In particular, which outgoing direction describes the “straightest” curve? Unless the angle defect Θ(v) is equal to zero, we cannot simply unfold the triangles around v into the plane. An idea considered by Polthier & Schmies (Polthier and Schmies, 1998) is to instead pick the outgoing direction such that there is “equal angle” on either side. More precisely, given the incoming direction, we can define the outgoing direction as the one such that the total angle between the two, measured on either side, is exactly half the sum of interior angles at vertex v (see inset figure). Equivalently, one can work in a local polar coordinate system where angles are normalized to sum to 2π; this viewpoint has been carefully studied by Troyanov (Troyanov, 1986), and was later adopted in geometry processing for problems involving polyhedral geodesics (Polthier and Schmies, 1998) and tangent vector fields at vertices (Zhang et al., 2006); Sharp et al. (Sharp et al., 2018, Section 5.2) provide a concise description. As in the smooth case, this definition of straightness yields a unique solution to the initial value problem, even for paths through vertices with nonzero angle defect.
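The equal-angle rule can be sketched in a few lines, representing directions around the vertex by a single angular coordinate in the cone metric (our own minimal sketch; the function name is hypothetical):

```python
import math

def straightest_outgoing_angle(incoming_angle, corner_angles):
    """Equal-angle ("straightest") continuation through a vertex, in the
    spirit of Polthier & Schmies.

    Directions around the vertex are angular coordinates in [0, total),
    where total is the sum of the interior corner angles. The outgoing
    direction leaves half the total angle on either side of the path."""
    total = sum(corner_angles)
    return (incoming_angle + total / 2.0) % total

# Flat vertex (six equilateral corners, total angle 2*pi): the
# straightest continuation is the ordinary straight line, i.e. the
# incoming direction rotated by pi.
out_flat = straightest_outgoing_angle(0.0, [math.pi / 3] * 6)
# Cone-like vertex (five corners, total 5*pi/3): continuation at 5*pi/6.
out_cone = straightest_outgoing_angle(0.0, [math.pi / 3] * 5)
```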

Shortest. In contrast, the behavior of shortest curves on a polyhedral surface depends critically on the sign of the angle defect Ω. Consider for instance two points p and q directly opposite a cone-like vertex v (Ω(v) > 0), as depicted in Fig. 6. By symmetry, one might expect that the shortest route between these points is to walk along the straightest possible path from p to v, then from v to q. However, one can find an even shorter path by walking “around” the cone—just as one might find a shorter path by walking around a hill, rather than walking over it. In particular, if we cut the cone from the boundary through the point q and up to the vertex v, then the straight line segment from p to either copy of q gives us a shortest path. Hence, straightest curves on a polyhedral surface are not always shortest. In this case, the path through v is not even locally shortest, since even in a tiny region around v we can make the path slightly shorter by going around the vertex instead of through it. In general, it is never advantageous (in terms of path length) to pass through a spherical vertex (Ω > 0). Therefore, the only shortest geodesics passing through a positive vertex are those terminating at the vertex itself.

On the other hand, if v is a saddle vertex (Ω(v) < 0), one can often find much shorter paths by passing through vertices. Some basic intuition is provided by Fig. 4, bottom right: to go from one side of a saddle region to another, it is much quicker to go straight through the vertex than to travel across numerous “ripples.” In fact, starting with a straight line from a point p to a saddle vertex v, there will be infinitely many outgoing directions that yield a shortest path; these directions form a wedge-like region of angle −Ω(v) centered around the straightest continuation of the incoming direction (see inset figure). The union of the incoming path with the wedge is sometimes called a funnel, and is the starting point for a large family of algorithms reviewed in Sec. 4.1. Again the “shortest” and “straightest” pictures disagree: a straightest geodesic passing through a saddle vertex must bisect the funnel, whereas a shortest geodesic can continue along any direction contained in the funnel.

Figure 5. Any pair of adjacent triangles can be unfolded into the plane without distorting distance (left). A geodesic on a polyhedral surface is therefore equivalent to a straight line along some planar triangle strip—so long as it does not pass through any vertices.
Figure 6. The triangles around any vertex v with positive angle defect (left) are intrinsically no different from a smooth cone (center). Therefore, the shortest path between two points on a polyhedral surface will never pass through a positive vertex: we can always cut open the cone in a way that lets us draw a straight line from p to q without passing through v. When v sits directly between p and q, this path will not even be unique (right).

2.2.2. Discrete Exponential Map

The analysis above has important implications for the exponential map on a polyhedral surface. Consider for instance tracing straightest geodesics in every possible direction from a point p. When paths hit a spherical vertex v, they split into two groups which meet discontinuously on the opposite side of v. When paths hit a saddle vertex v, they again split into two groups, which do not completely cover the funnel opposite v. In either case, the exponential map fails to be injective as soon as we hit any non-flat vertex; in other words, the injectivity radius is simply the distance to the closest vertex. Moreover, every spherical vertex of a polyhedral surface is contained in the cut locus. As explored by Itoh & Sinclair (Itoh and Sinclair, 2004), this means that algorithms which exactly compute distance on a polyhedral surface will (surprisingly enough) do a very poor job of approximating the cut locus of the smooth surface being approximated. For instance, the cut locus of a point on a polyhedral sphere will contain every single vertex, whereas the cut locus of a point on a smooth sphere consists of just a single point.

3. PDE-based Methods

A large number of methods for computing geodesic distance are based on formulating the problem in terms of partial differential equations (PDEs) on a smooth manifold, then discretizing and solving these PDEs via, e.g., finite element methods (FEM) or other numerical techniques. These methods are generally suitable for computing the single or multiple source geodesic distance (SSGD/MSGD); explicit geodesic paths can subsequently be obtained by, e.g., tracing curves along the gradient of the distance function (though such tracing requires careful consideration in order to achieve accurate results; see Sec. 5). Some of these methods are also quite attractive for solving all-pairs geodesic distance problems (APGD), since their solutions are well-approximated by precomputing a relatively small collection of eigenfunctions (Sec. 3.3.1). PDE-based methods are attractive because they are built on top of well-established techniques from scientific computing (such as FEM), as opposed to computational geometry methods (Sec. 4) which must be built “from the ground up.” As a result, such techniques often come with a clearer picture of, e.g., convergence rates under refinement, making them particularly well-suited for the smooth geodesic problem (Sec. 1.2). Moreover, they easily generalize to problems involving multiple or curved sources, and can often be implemented on data structures beyond triangulations (as shown in Fig. 7). On the other hand, since these methods compute only the geodesic distance, additional work must be done if one wishes to extract geodesic paths, as discussed in Sec. 5.

PDE-based methods can be categorized according to the type of equation used to characterize geodesic distance. Different starting points will lead to different numerical treatments, which subsequently have different computational trade offs (e.g., different mesh or solver requirements). At a high level there are two basic classes of methods:

  • Wavefront-based. The basic principle behind wavefront-based methods resembles (in spirit) classical methods like Dijkstra’s algorithm, or the window-based methods discussed in Section 4: information about distance propagates outward from a given source point. In the continuous setting, this perspective corresponds to hyperbolic PDEs, i.e., those that describe the evolution of a wavefront emanating from the source.

  • Diffusion-based. Diffusion-based methods more closely resemble problems arising in, say, spectral graph theory: information about distance is obtained by way of functions associated with a discrete Laplace operator, computed via a process that looks more like repeated local averaging rather than wavefront propagation. In the continuous setting, this perspective corresponds to elliptic and parabolic PDEs such as Poisson equations and heat diffusion.

Trade offs.

Historically, wavefront-based methods were developed prior to most diffusion-based algorithms; as a result, a wide variety of higher-order accurate strategies have been developed for regular grids on Euclidean domains (e.g., (Ahmed et al., 2011)), motivated in large part by accurate numerical solvers for level set equations (Osher and Fedkiw, 2003a). However, most of these techniques do not immediately generalize to the setting of curved surfaces, where one typically does not have a regular, uniform tessellation of the domain needed to support (for instance) larger finite difference stencils. Diffusion-based algorithms tend to have some nice performance advantages, since they are largely based on solving easy linear elliptic equations rather than more difficult nonlinear hyperbolic ones, though solutions can sometimes be over-regularized (e.g., near the cut locus of the geodesic distance function). A particularly nice feature of linear methods is that one can often pre-factor the associated matrices, which in practice yields about an order of magnitude improvement in amortized performance using modern direct solvers (Chen et al., 2008; Schenk et al., 2001).
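To illustrate the prefactorization argument concretely (a sketch under the assumption that SciPy's sparse direct solver is available; the matrix here is a toy 1D screened Laplacian, not a mesh operator):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy screened-Laplacian-like SPD system on a path graph of n nodes.
n = 2000
A = sp.diags([-np.ones(n - 1), 2.01 * np.ones(n), -np.ones(n - 1)],
             [-1, 0, 1], format="csc")

lu = spla.splu(A)                  # factor once (the expensive step)

# Amortize: reuse the factorization across many sources.
solutions = []
for s in (0, n // 2, n - 1):
    b = np.zeros(n)
    b[s] = 1.0                     # "spike" right-hand side
    solutions.append(lu.solve(b))  # cheap triangular solves per source
```

The factorization cost is paid once; each subsequent source requires only forward/back substitution, which is where the order-of-magnitude amortized speedup cited above comes from.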

Meshing requirements for elliptic methods also tend to be less stringent than for hyperbolic ones; see Sec. 6 for further discussion. As discussed in Sec. 1.2, the accuracy of all PDE-based methods (as well as all methods from computational geometry) is fundamentally limited by the accuracy of the domain representation itself (e.g., the use of linear elements to approximate curved surfaces). The only real resolution to this issue is to compute geodesic distance on higher-order surface representations such as splines and subdivision surfaces; several authors have recently considered this approach (De Goes et al., 2016; Nguyen et al., 2016).

Overall, there is a rather nice viewpoint that unifies all current PDE-based methods: they are all effectively trying to minimize the residual of the eikonal equation |∇φ| = 1, albeit by very different means (see especially the discussion at the end of Sec. 3.3.3). Since this energy is nonlinear and nonconvex, it is not surprising to find that many different computational strategies have been proposed: hyperbolic methods like fast marching deal with nonlinearity by taking advantage of special update orderings, whereas elliptic methods like the heat method decompose the problem into two linear pieces connected by a nonlinear change of variables. Very little work has been done on synthesizing these perspectives (e.g., combining fast linear approximations with carefully-ordered updates), though this direction is likely to be fruitful, especially given the diverse range of application problems and machine architectures on which geodesic distance queries are needed.

Figure 7. Unlike methods from computational geometry, which are specially tailored to triangulations, PDE-based methods can easily be implemented on a variety of data structures by applying standard discrete differential operators. Here, we show a variety of implementations of the heat method using different discrete Laplace operators.

3.1. Laplace-Beltrami and Cotan Laplace

PDE-based methods for geodesic distance computation all have a close relationship to the Laplace-Beltrami operator Δ, which is discretized by a (weighted) graph Laplacian. In particular, for a graph with edge weights w_ij, the graph Laplacian is encoded by a real symmetric matrix L with off-diagonal entries

L_ij = −w_ij

and diagonal entries

L_ii = Σ_{j≠i} w_ij.

A simple choice of edge weights is just w_ij = 1, though of course these weights do not capture much geometric information. When the vertices have associated vertex positions p_i, the edge weights can be determined by some function of the edge lengths; one common choice is to use Gaussian edge weights w_ij = exp(−|p_i − p_j|²/4t) (for some small parameter t > 0), which is motivated by the close connection between the Laplace-Beltrami operator and the Euclidean heat kernel (see Sec. 3.3). Finally, when the edges of the graph come from the edges of a triangulated surface, one typically adopts the cotangent weights

w_ij = ½(cot α_ij + cot β_ij), where α_ij and β_ij are the interior angles opposite edge ij (or zero, for edges on the domain boundary). More broadly, these discrete Laplacians are only the tip of the iceberg: discrete Laplace operators have been developed for a large variety of geometric data structures, providing a wide variety of options for implementing PDE-based geodesic schemes (see especially the discussion in Sec. 3.3.3). Some methods, such as those based on diffusion (Sec. 3.3), easily generalize to these settings by just “plugging in” a different Laplace matrix; other methods, such as fast marching (Sec. 3.2.1), do not immediately generalize since they may depend on particular features of a discretization (e.g., the fact that values at the vertices of a simplex uniquely determine an interpolating affine function). Note that since the Laplace-Beltrami operator depends only on the intrinsic geometry of the domain (i.e., its metric) and not the way it sits in space, any distance algorithm defined in terms of the Laplacian (including all the methods we consider here) will automatically be isometry invariant.
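For concreteness, a dense assembly of the cotan Laplacian might look as follows (an illustrative sketch, using the positive semi-definite sign convention with −w_ij off the diagonal; a practical implementation would use sparse storage):

```python
import numpy as np

def cotan_laplacian(V, F):
    """Dense cotan Laplacian (positive semi-definite convention):
    L[i,j] = -w_ij = -(cot a_ij + cot b_ij)/2 off the diagonal, and
    diagonal entries chosen so that each row sums to zero."""
    V = np.asarray(V, float)
    L = np.zeros((len(V), len(V)))
    for f in F:
        for k in range(3):
            i, j, o = f[k], f[(k + 1) % 3], f[(k + 2) % 3]
            u, w = V[i] - V[o], V[j] - V[o]           # edges to corner o
            cot = np.dot(u, w) / np.linalg.norm(np.cross(u, w))
            L[i, j] -= 0.5 * cot                      # o is opposite edge ij
            L[j, i] -= 0.5 * cot
    L -= np.diag(L.sum(axis=1))                       # rows sum to zero
    return L
```

Interior edges accumulate contributions from both incident triangles (the α and β terms), while boundary edges receive only one, matching the convention above.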

3.2. Wavefront Propagation

The basic prototype for wavefront-based methods is the linear hyperbolic wave equation

∂²u/∂t² = Δu,   (1)
where for each time t, u(t, ·) is a real-valued function on a manifold M, and Δ is the Laplace-Beltrami operator on M (for M = ℝⁿ, Δ is just the standard Euclidean Laplacian). The intuitive connection to geodesic distance is that a perturbation at some initial point (e.g., a small stone dropped in a pond) will send out a wavefront that maintains a constant distance from the source point. Hence, the arrival time of the wavefront is correlated with the geodesic distance. Though a few methods extract distance information from the classical wave equation (Sinha and Kazhdan, 2016) or the quantum mechanical (Schrödinger) wave equation (Gurumoorthy and Rangarajan, 2009), by far the most common approach is to consider the nonlinear hyperbolic eikonal equation

|∇φ| = 1,  with φ = 0 on the source set,   (2)

which directly characterizes the distance function φ in terms of the norm of its gradient ∇φ. Intuitively, this equation says something very straightforward: on the boundary of the domain (i.e., at any point in the source set), the distance should be zero. At every other point of the domain, the change in distance per unit distance along the direction of greatest increase should simply be 1. However, actually solving this equation is not as straightforward, since it is nonlinear and hence cannot be approximated using standard linear finite element methods. Instead, the general strategy is to iteratively update the solution using techniques like (nonlinear) Gauss-Seidel; for carefully-crafted update orders (and suitable conditions on the mesh geometry) such strategies can actually converge in a single iteration, corresponding to well-established algorithms such as fast marching (this perspective is nicely explained by Bornemann and Rasch (Bornemann and Rasch, 2004)).

3.2.1. Fast Marching

The fast marching method was originally developed for distance transforms on Euclidean domains discretized as regular grids; the basic principles of this method and its variants (e.g., fast sweeping (Luo, 2013)) are shared by a broad class of schemes used in level set methods (Osher and Fedkiw, 2003b) and to solve Hamilton-Jacobi equations (of which the eikonal equation is one example). Kimmel and Sethian extended this approach to curved domains represented as triangulated surfaces (Kimmel and Sethian, 1998). The basic strategy of this method is very similar to Dijkstra’s shortest path algorithm: set the distance at the source point (or set) to zero, and set the tentative distance everywhere else to infinity; then use a region-growing strategy to update the remaining distances in an “upwind” order, i.e., consider the node with smallest distance first. Like Dijkstra, the running time is therefore O(n log n), since for a triangulation the number of edges is O(n), where n is the number of vertices.

Relative to Dijkstra, the key modification is that distances are not updated according to paths along edges; instead, one updates the distance value at a vertex by solving for the linear function over a triangle that satisfies the eikonal equation (Kimmel and Sethian, 1998, Section 4.1). In particular, if the values at two corners are known, one simply needs to pick a third value so that the slope of the linear function interpolating all three values equals 1 (as illustrated in the inset figure).

We can connect the fast marching method to other PDE-based methods by observing that the term |∇φ|² is the integrand of the Dirichlet energy, which in turn can be expressed in terms of the cotan Laplace operator. In particular, consider a triangle ijk where the distance values at vertices i and j are known, and the distance at vertex k remains to be determined. Suppose we encode these distance values as a column vector u ∈ ℝ³, and let A be the local cotan matrix given by

A_pq := −½ cot θ_pq for p ≠ q,  A_pp := −Σ_{q≠p} A_pq,

where θ_pq is the interior angle opposite edge pq (see for instance (Crane et al., 2013a, Section 6.2)). A linear function interpolating the distance values at the three vertices will then satisfy the eikonal equation if uᵀAu equals the triangle area (since uᵀAu is exactly the Dirichlet energy of the interpolant), which we can view as an ordinary quadratic equation for the unknown distance u_k. Solving this equation yields (in general) two solutions corresponding to positive and negative slope; the update will always use the larger value, since distance increases monotonically as we move away from the source.
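A sketch of this local update (our own helper, not from the survey; it works with explicit 2D vertex positions and falls back to a Dijkstra-style edge update when the triangle update is invalid, i.e., a negative discriminant or a non-monotone root):

```python
import numpy as np

def fmm_update(p1, p2, p3, d1, d2):
    """Tentative fast-marching distance at p3, given distances d1, d2 at
    p1, p2: find the affine function on the triangle whose gradient has
    unit norm, taking the larger root."""
    p1, p2, p3 = (np.asarray(p, float) for p in (p1, p2, p3))
    e1, e2 = p2 - p1, p3 - p1
    E = np.array([e1, e2])
    Q = np.linalg.inv(E @ E.T)          # metric for edge-difference vector
    # |grad u|^2 = [d2-d1, x] Q [d2-d1, x]^T = 1, with x = d3 - d1
    a, b = Q[1, 1], 2.0 * Q[0, 1] * (d2 - d1)
    c = Q[0, 0] * (d2 - d1) ** 2 - 1.0
    dijkstra = min(d1 + np.linalg.norm(e2), d2 + np.linalg.norm(p3 - p2))
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return dijkstra                 # no unit-gradient interpolant
    d3 = d1 + (-b + np.sqrt(disc)) / (2.0 * a)   # larger root: upwind
    # accept the triangle update only if it respects monotonicity
    return min(d3, dijkstra) if d3 >= max(d1, d2) else dijkstra
```

For plane-wave data (equal values on one edge) the update reproduces the exact perpendicular distance; for degenerate configurations it gracefully degrades to the edge-based update.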

A basic difficulty with wavefront propagation methods is that the order in which vertices are updated may violate causality, i.e., even if a vertex i is closer to the source than a vertex j, the distance at j may be finalized prior to the distance at i; hence, the solution will fail to be monotonic (two examples of how this can happen are shown in Figure 8). Kimmel and Sethian suggest resolving this issue by an “unfolding” procedure, but this procedure is nonlocal and may not terminate before reaching the boundary. In general, one therefore has to either apply an iterative strategy (as discussed by Bornemann and Rasch (Bornemann and Rasch, 2004)), or perform remeshing, as discussed in Sec. 6. From a computational point of view, the basic challenge with any method based on wavefront propagation (including Dijkstra’s algorithm) is that it is difficult to parallelize: the upwind strategy effectively induces a serial ordering on computation. Since the order of computation is data-dependent (i.e., it depends on earlier distance values), it can also lead to significant dynamic branching and incoherent memory access compared to linear methods (Sec. 3.3), which more typically depend on a fixed traversal order (e.g., using a fixed matrix factorization).

Figure 8. Methods based on solving hyperbolic equations may fail to respect causality in the presence of obtuse angles. For instance, in the left figure a source at 1 will get propagated to 2 and 3 before 4, even though 4 is closer. In the worst case this phenomenon can be highly nonlocal, as seen on the right where the node closest to the source node 1 is the last one to be updated.

3.3. Diffusion

The basic prototype for diffusion-based methods is the linear parabolic heat equation

∂u/∂t = Δu,   (3)
where for each time t the function u(t, ·) is a real-valued function on the domain. Despite the very similar form of Eqn. 3 and Eqn. 1, they have very different behavior and hence lead to very different computational methods. Whereas small perturbations to the initial data yield high-frequency oscillations in solutions to the wave equation, such perturbations are diffused or smoothed out by the heat equation, hinting at greater stability (e.g., small computational errors are not propagated forward in time). In particular, if the initial data is a Dirac delta δ_x centered at a point x, then the heat kernel at x is the solution to the heat equation

∂u/∂t = Δu,  u(0, ·) = δ_x.   (4)
More generally, we will use k_t(x, y) to denote the heat kernel at time t, centered at the point x, and evaluated at the point y. In the Euclidean case the heat kernel is just a Gaussian of increasing width and decaying magnitude, though in general it has no closed form (see for instance (Sharp et al., 2018, Section 3.2)). Any other solution to the heat equation can be expressed as a convolution with the heat kernel, i.e., a gradual “blurring out” of the initial data. The heat equation also has an important statistical interpretation: if one views the initial function u_0 as describing the spatial distribution of a large number of discrete particles, then the solution describes the distribution after these particles have taken “random walks” (i.e., Brownian motion) for time t (Lawler, 2010).

Diffusion-based methods were initially used to compute smooth distance-like functions (Sec. 3.3.2) or functions that satisfy the axioms of a distance metric but do not actually provide the geometric distance between points (Sec. 3.3.1); more recently, a diffusion-based strategy was introduced that provides the true geodesic distance (Sec. 3.3.3). Computationally, the appeal of all such methods is that they amount to solving sparse linear systems, which can be done efficiently using standard techniques from numerical linear algebra. As a result, one does not have to develop solvers specially tailored to the task of computing geodesic distance, as with the hyperbolic methods discussed in Sec. 3.2 or the polyhedral strategies in Sec. 4. Instead, any development in numerical linear algebra (such as more accurate linear solvers, or fast parallel implementations) can immediately be used to improve computation of geodesic distances. Diffusion-based methods have also become popular due to their ease of implementation, which generally involves only three basic ingredients:

  • Building a weighted graph Laplacian associated with the mesh;

  • Solving one or more sparse linear algebra problems involving ;

  • Evaluating simple per-triangle or per-vertex expressions.

Per-element operations might be performed either before or after solving linear problems (or both), but generally amount to evaluations of, e.g., simple closed-form sums (this pattern is in fact shared by all methods we will discuss in this section). The bulk of the complexity is therefore taken care of by the linear solver, which can be treated as a “black box”: one does not have to implement complex topological data structures, or even maintain a priority queue. On the flip side, this formulation assumes that a suitable mesh is already provided as input; see Sec. 6 for further discussion.

3.3.1. Spectral Distances

The original motivation for diffusion-based distances comes from a desire to embed an abstract surface or manifold into a Euclidean space ℝᵐ – such an embedding allows one to approximate distances on the original manifold by measuring the ordinary Euclidean distance between points in ℝᵐ, though this approximation may only roughly resemble the true geodesic distance. This point of view was originally studied by Bérard et al. for a different purpose: to study the precompactness of smooth manifolds with bounded diameter and curvature (Bérard, 1986, Chapter VI E.53; Bérard et al., 1994). Later, this same point of view became a common theme in the context of data analysis, machine learning, and dimensionality reduction, where the manifold M and Laplace-Beltrami operator Δ are replaced with a discrete graph G and graph Laplacian L. In particular, let φ_1, φ_2, … and λ_1, λ_2, … be the eigenvectors and eigenvalues of L, so that

L φ_i = λ_i φ_i.   (5)
Belkin & Niyogi propose an embedding by Laplacian eigenmaps that maps each vertex x to the coordinates e(x) := (φ_1(x), …, φ_k(x)) (for some choice of k), arguing that this embedding minimizes the distortion of edge lengths (Belkin and Niyogi, 2003). A notion of distance is then given by d(x, y) := ‖e(x) − e(y)‖, where ‖·‖ is just the Euclidean norm. Göbel & Jagers study a notion of distance that is closely connected to the random walk interpretation of the heat equation – in particular, they interpret a graph as a Markov chain with probabilities determined by edge weights, and show that the expected time to walk from a vertex x to a vertex y and then back to x determines a metric on this graph (Göbel and Jagers, 1974). This commute time distance is equivalent to the effective resistance between x and y in an electrical network (Klein and Randić, 1993), which is also connected to the Laplacian (Saerens et al., 2004; Fouss et al., 2007). Lipman et al. define a closely related biharmonic distance based on the bi-Laplacian rather than the Laplacian (Lipman et al., 2010). Coifman & Lafon instead define a diffusion distance which captures the amount of information that diffuses from a vertex x to a vertex y after time t (Coifman and Lafon, 2006). In fact, all of these distances can be linked back to a discrete diffusion process – consider in particular the (discrete) harmonic Green’s function g_x, which is the solution to the equation L g_x = δ_x (where δ_x is the Kronecker delta centered at x). This function can also be viewed as the stationary solution to the discrete heat equation with fixed (Dirichlet) boundary conditions at vertex x. Letting b_x be the corresponding function for the bi-Laplacian, we can write the three distances mentioned above as

d_commute(x, y)² = g_x(x) − 2 g_x(y) + g_y(y),
d_biharmonic(x, y)² = b_x(x) − 2 b_x(y) + b_y(y),
d_diffusion(x, y)² = u_{2t,x}(x) − 2 u_{2t,x}(y) + u_{2t,y}(y),

where u_{t,x} denotes the solution to the discrete heat equation at time t and with initial conditions u_{0,x} = δ_x. Evaluating each of these distances for a given pair of vertices therefore amounts to solving a small number of linear equations. If, however, one wishes to compute the distance between a large number of vertex pairs (e.g., to solve the APGD problem), it is often more efficient to use a spectral expansion of these distance functions, i.e., to write them in terms of the eigenvalues and eigenvectors of the discrete Laplace operator (Equation 5), leading to a unified view of all four distances defined above:

d_eigenmap(x, y)² = Σ_{i=1}^{k} (φ_i(x) − φ_i(y))²,
d_commute(x, y)² = Σ_i (1/λ_i) (φ_i(x) − φ_i(y))²,
d_biharmonic(x, y)² = Σ_i (1/λ_i²) (φ_i(x) − φ_i(y))²,
d_diffusion(x, y)² = Σ_i e^{−2tλ_i} (φ_i(x) − φ_i(y))².
Note that the global point signature of Rustamov (Rustamov, 2007) yields a distance identical to the commute time distance d_commute. At this point it becomes apparent that all of these distances are variations on a common theme: embed the surface in ℝᵏ according to the eigenfunctions, then compute a weighted Euclidean distance in the embedding space. The choice of weights will impact the regularity and other features of the resulting distance function—for instance, the diffusion distance exhibits a degree of anisotropy around the source that depends on the diffusion time t, as shown in (Lipman et al., 2010, Figure 2).

Notably, none of these distances gives a good approximation of geodesic distance: on the whole they look more like highly regularized versions of the true geodesic distance, and cannot resolve the cut locus. However, they do provide an extremely efficient way to obtain a rough proxy for distance: since all of these functions are quite smooth, one can obtain a good approximation by using a truncated series with k terms, where k is much smaller than the number of vertices (in practice, around 150–200). In this case one can precompute the eigenfunctions, at which point it becomes extremely efficient to evaluate the point-to-point distance (by just evaluating a small sum), and by extension, solving the APGD problem for a small subset of vertices (e.g., a collection of landmark points) becomes relatively inexpensive. However, computing the distance to a large set of vertices (such as the boundary of the domain) may be expensive, since one needs to evaluate a large sum for each point; here, other algorithms discussed in this section may be more appropriate. Finally, since these distances are ultimately derived from distances in Euclidean ℝᵏ, they exactly satisfy the key properties of a metric: nonnegativity, symmetry, and the triangle inequality. Note that any embedding into ℝᵏ will immediately satisfy these properties; likewise, as with any algorithm built on top of the Laplace-Beltrami operator, these distances are also isometry invariant.
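Assuming the eigenpairs of a positive semi-definite graph Laplacian have been precomputed (or are computed on the fly, as in this small dense sketch), the truncated expansions above take only a few lines; the function name and the choice of k are illustrative:

```python
import numpy as np

def spectral_distances(L, x, y, t=0.5, k=50):
    """Truncated spectral expansions of the commute-time, biharmonic, and
    diffusion distances between vertices x and y, from the first k
    nonconstant eigenpairs of a positive semi-definite graph Laplacian L.
    Global constant factors are omitted."""
    lam, phi = np.linalg.eigh(L)              # eigenvalues in ascending order
    lam, phi = lam[1:k + 1], phi[:, 1:k + 1]  # drop the constant mode
    d2 = (phi[x] - phi[y]) ** 2               # (phi_i(x) - phi_i(y))^2
    commute = np.sqrt(np.sum(d2 / lam))
    biharmonic = np.sqrt(np.sum(d2 / lam ** 2))
    diffusion = np.sqrt(np.sum(np.exp(-2.0 * t * lam) * d2))
    return commute, biharmonic, diffusion
```

For the APGD use case described above, the expensive eigendecomposition would be done once, after which each pairwise query is a short weighted sum.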

Further connections between distance and spectral approximation are detailed in the recent survey by Patané (Patané, 2016).

3.3.2. Poisson-based Methods

There is a large class of methods for generating smoothed distance-like functions to the boundary of a domain by solving linear elliptic equations; these methods are largely motivated by problems from image processing and computer vision, such as extracting medial axes or other “skeletons” from image regions. For instance, Tari et al. (Tari et al., 1997) solve the screened Poisson equation

Δu − u/ρ² = 0 on Ω,  u = 1 on ∂Ω,   (6)

and Gorelick et al. (Gorelick et al., 2006) solve a Poisson equation

Δu = −1 on Ω,  u = 0 on ∂Ω;   (7)
derivatives of such functions can then be used to extract an approximate cut locus/medial axis to serve as a “skeleton” of the region Ω. Many variants of this strategy have been considered that yield more distance-like functions, such as applying various pointwise transformations or “normalizations” to the solution of PDEs like Equation 6 or 7; one example is the Spalding-Tucker transformation (Belyaev and Fayolle, 2015, Section 6). Much like the functions studied in Sec. 3.3.1, however, none of these smoothed functions corresponds to the actual geodesic distance. Moreover, unlike the functions from Sec. 3.3.1, they do not provide a well-defined distance metric d(x, y) between arbitrary pairs of points, but rather just compute the distance transform of the domain boundary ∂Ω.

3.3.3. Heat Method

The diffusion-based perspective can also be used to compute the true geodesic distance, rather than a distance-like function, via the heat method (Crane et al., 2017). This method is ultimately connected to both the eikonal equation discussed in Sec. 3.2 as well as the Poisson-based approaches from Sec. 3.3.2, though the starting point is an important observation about the close relationship between the geodesic distance function (Sec. 2.1.5) and the short-time heat kernel (Equation 4) known as Varadhan’s formula (Varadhan, 1967):

φ(x, y) = lim_{t→0} √(−4t log k_t(x, y)).   (8)
This formula effectively says that if a “pin prick” of heat centered at the point x diffuses for a very short time t, then the resulting heat distribution looks nearly identical to the geodesic distance function, up to a simple transformation of the value at each point. However, it is quite challenging to compute a numerical approximation of the heat kernel that decays at exactly the right rate; Crane et al. sidestep this issue by recalling the eikonal equation (Equation 2), which says that the gradient ∇φ must have unit magnitude (|∇φ| = 1). Therefore, any function that is a monotonic function of distance can be used to obtain the distance itself, by normalizing its gradient and recovering the corresponding scalar potential. In particular, to compute the distance to a point x one can apply the following procedure:

  1. Solve the heat equation ∂u/∂t = Δu for a small time t, with initial conditions u_0 = δ_x;

  2. Evaluate the vector field X = −∇u/|∇u|;

  3. Solve the Poisson equation Δφ = ∇·X.

Step 1 is approximated by a single step of backward Euler, i.e., by solving the linear elliptic equation

(id − tΔ) u_t = δ_x   (9)

for a small time step t. The Poisson equation in the third step corresponds to minimization of the energy ∫|∇φ − X|², i.e., it finds the function φ whose gradient is as close as possible to X (in the L² sense). Note that all of these steps are described as operations in the smooth setting; as with many PDE-based methods, a final algorithm is obtained by picking a particular discretization of the domain and corresponding discrete differential operators. For instance, the heat method has been implemented on triangle meshes, polygonal meshes, point clouds (Crane et al., 2013b), images (Yang and Cohen, 2015), subdivision surfaces (de Goes et al., 2014), tetrahedral meshes (Belyaev and Fayolle, 2015), spline surfaces (Nguyen et al., 2016), and voxelizations (Caissard et al., 2017). A variant of the heat method that replaces the Laplace operator with the connection Laplacian also computes parallel transport of vectors along shortest geodesics (Sharp et al., 2018), as well as the logarithmic map discussed in Sec. 2.1. An interesting consequence of using higher-order elements such as splines and subdivision surfaces is that one can in principle obtain estimates of geodesic distance that are even more accurate than the exact polyhedral distance—see (Nguyen et al., 2016) and in particular (De Goes et al., 2016), who explore higher-order schemes.
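Putting the three steps together, a compact dense implementation on a triangle mesh might look as follows (our own minimal sketch, not the authors' reference code: lumped mass matrix, a crude t = (mean edge length)² heuristic, and a slightly regularized Poisson solve; a practical implementation would use sparse matrices and prefactorization):

```python
import numpy as np

def heat_method(V, F, src, t=None):
    """Minimal dense sketch of the heat method on a triangle mesh.
    Convention: L is the positive semi-definite cotan matrix, M the
    lumped (one-third area) mass matrix."""
    V, F = np.asarray(V, float), [tuple(f) for f in F]
    n = len(V)
    L, M = np.zeros((n, n)), np.zeros(n)
    for f in F:
        for k in range(3):
            i, j, o = f[k], f[(k + 1) % 3], f[(k + 2) % 3]
            u, w = V[i] - V[o], V[j] - V[o]
            cot = np.dot(u, w) / np.linalg.norm(np.cross(u, w))
            L[i, j] -= 0.5 * cot; L[j, i] -= 0.5 * cot
            L[i, i] += 0.5 * cot; L[j, j] += 0.5 * cot
        area = 0.5 * np.linalg.norm(np.cross(V[f[1]] - V[f[0]], V[f[2]] - V[f[0]]))
        for v in f:
            M[v] += area / 3.0
    if t is None:  # heuristic: t = (mean edge length)^2
        h = np.mean([np.linalg.norm(V[f[k]] - V[f[(k + 1) % 3]])
                     for f in F for k in range(3)])
        t = h * h
    # Step 1: one backward-Euler step of heat flow from a spike at src.
    delta = np.zeros(n); delta[src] = 1.0
    u = np.linalg.solve(np.diag(M) + t * L, delta)
    # Steps 2-3: normalized negative gradient per face, then Poisson solve.
    div = np.zeros(n)
    for (i, j, k) in F:
        N = np.cross(V[j] - V[i], V[k] - V[i])
        A2 = np.linalg.norm(N); N = N / A2            # unit normal, 2*area
        g = (np.cross(N, V[k] - V[j]) * u[i] + np.cross(N, V[i] - V[k]) * u[j]
             + np.cross(N, V[j] - V[i]) * u[k]) / A2  # gradient of u on face
        X = -g / (np.linalg.norm(g) + 1e-300)         # unit, away from src
        for (a, b, c) in [(i, j, k), (j, k, i), (k, i, j)]:
            e1, e2 = V[b] - V[a], V[c] - V[a]
            ca = np.dot(V[a] - V[c], V[b] - V[c])
            ca /= np.linalg.norm(np.cross(V[a] - V[c], V[b] - V[c]))
            cb = np.dot(V[a] - V[b], V[c] - V[b])
            cb /= np.linalg.norm(np.cross(V[a] - V[b], V[c] - V[b]))
            div[a] += 0.5 * (ca * np.dot(e1, X) + cb * np.dot(e2, X))
    phi = np.linalg.solve(L + 1e-8 * np.eye(n), -div)
    return phi - phi[src]
```

With the positive semi-definite sign convention used here, the weak form of Δφ = ∇·X becomes Lφ = −b, where b holds the integrated divergence; the small diagonal regularizer merely fixes the constant shift, which is then removed by subtracting φ at the source.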

One can use the heat method to connect ideas from the various PDE-based methods considered so far. From the perspective of Sec. 3.3.2, Equation 9 is a screened Poisson equation (where the boundary is just a single point x), and Varadhan’s formula can be viewed as one possible “normalization.” However, as shown in Crane et al. (Crane et al., 2013b, Figure 6), this simple transformation does not produce an accurate distance approximation, motivating the need for a more sophisticated normalization strategy. The heat method can also be given a variational interpretation which connects it to fast marching methods (Belyaev and Fayolle, 2015). In particular, consider the energy

E(φ) := ∫_M (|∇φ| − 1)² dx,

i.e., the residual of the eikonal equation; the corresponding Euler-Lagrange equations are given by the nonlinear equation

Δφ = ∇·(∇φ/|∇φ|).

The heat method can therefore be viewed as the first iteration of a fixed point scheme for this equation, where the solution to the heat equation provides an initial guess for φ. Belyaev and Fayolle show that improved accuracy can be achieved by applying successive iterations, and more broadly, by applying other optimization strategies to minimize E, albeit at higher computational cost. This point of view provides a clear connection between the heat method and the fast marching method, which can likewise be viewed as a numerical method for minimizing E (as discussed in Sec. 3.2.1). Finally, Litman & Bronstein note that the heat kernel is well-approximated via a spectral expansion akin to those seen in Sec. 3.3.1, enabling them to dramatically accelerate the heat method via precomputation of Laplacian eigenvectors (Litman and Bronstein, 2016).

4. Computational Geometry Methods

The methods reviewed in this section address the geodesic distance problems directly in the polyhedral setting. We review both exact and approximate methods, classifying them according to the approach taken to the solution.

4.1. Global methods

Global methods compute globally shortest paths. We distinguish between: methods that work on the whole polyhedral domain and provide an exact solution (Section 4.1.1); approximations of such methods that trade off accuracy for speed (Section 4.1.2); and methods that work on a graph-based discretization of the domain, thus necessarily providing approximate solutions (Section 4.1.3).

4.1.1. Polyhedral Methods – Exact

Methods for solving the polyhedral geodesic distance problem are built on the piecewise flatness of polyhedral surfaces. This property enables the planar unfolding of triangle strips, which reduces the computation from 3D polyhedral surfaces to 2D unfolded planes (see Section 2.2). Within an unfolded triangle strip, locally shortest paths are straight line segments. By enumerating all locally shortest paths between two points, a globally shortest path can be obtained as the one of minimum length. However, without an efficient pruning strategy, the number of such locally shortest paths grows exponentially with the size of the mesh and quickly becomes infeasible to compute (Balasubramanian et al., 2009). Therefore, the key challenge is to remove as much computational redundancy as possible.
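The unfolding primitive underlying all these methods can be sketched in a few lines: using only the intrinsic edge lengths, an adjacent triangle is laid flat across the shared edge, after which a locally shortest path crossing that edge is just a straight segment. The function below is a minimal illustration with hypothetical names, not any particular algorithm's implementation:

```python
import numpy as np

def unfold_apex(l_bc, l_bd, l_cd):
    """Unfold the apex D of a triangle (B, C, D) that shares edge BC with a
    triangle already laid out with B = (0,0), C = (l_bc, 0) and its own apex
    above the x-axis. D is placed below the axis, using only edge lengths."""
    x = (l_bc**2 + l_bd**2 - l_cd**2) / (2.0 * l_bc)
    y = -np.sqrt(max(l_bd**2 - x**2, 0.0))
    return np.array([x, y])

# Two coplanar right triangles: A=(0,1), B=(0,0), C=(1,0), D=(1,-1).
A = np.array([0.0, 1.0])
D = unfold_apex(1.0, np.sqrt(2.0), 1.0)         # recovers (1, -1)

# A locally shortest path from A to D is the straight segment in the
# unfolded plane, provided it crosses the shared edge BC (y = 0, 0<=x<=1).
t = A[1] / (A[1] - D[1])                        # parameter where y = 0
crossing_x = A[0] + t * (D[0] - A[0])
length = np.linalg.norm(D - A)                  # = sqrt(5) on this flat strip
```

Exact algorithms enumerate such unfolded strips; the pruning strategies discussed next are what make the enumeration tractable.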

Addressing this challenge, O’Rourke et al. (O’Rourke et al., 1984) first proposed an algorithm to solve the PPGP problem in O(n⁵) time, where n is the number of vertices of the surface. Their method can be viewed as a theoretical milestone in that it shows the PPGP problem can be solved in polynomial time. However, it is still too time-consuming for practical applications. Thus, our discussion starts from (Mitchell et al., 1987), which laid the foundation of practical polyhedral geodesic algorithms.

Figure 9. Illustration of the window structure. The shortest paths within an unfolded triangle strip can be encoded as a window w = (b0, b1, d0, d1, σ, τ), where b0, b1 are the endpoints of the window on its edge; d0, d1 are the corresponding distances from b0, b1 to the pseudosource (saddle vertex); σ denotes the geodesic distance from the pseudosource to the source; and τ denotes the propagation direction. Adapted from (Surazhsky et al., 2005; Qin, 2017).

Mitchell et al. (Mitchell et al., 1987) proposed the first practical algorithm for geodesic computation on polyhedral surfaces, commonly referred to as the MMP (Mitchell-Mount-Papadimitriou) algorithm. Their main idea can be summarized as the continuous Dijkstra technique, which extends the famous Dijkstra’s algorithm (Dijkstra, 1959) from graphs to polyhedral surfaces. As an analogy, they propose to view edges of polyhedral surfaces as nodes of a graph. However, an edge contains infinitely many points, so its distance cannot be represented by a single scalar value. To handle this problem, a dedicated data structure named window is introduced (Mitchell et al. originally used the term interval; subsequent works (Surazhsky et al., 2005; Xin and Wang, 2009; Xu et al., 2015; Qin et al., 2016) popularized the more intuitive term window, which this survey adopts), which encodes all locally shortest paths in an unfolded triangle strip with a tuple (Figure 9). Then, the polyhedral geodesic distances can be computed by finding the optimal windows on the edges of the mesh. The optimization of windows on an edge is accomplished by trimming overlapping windows into disjoint ones according to the smaller distance in the overlapping part (Figure 10). More specifically, windows are trimmed in a pairwise manner after being propagated to edges, and stored in an ordered list according to their positions.
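The trimming step reduces to a small algebra problem: given two windows with unfolded pseudosources p0, p1 (over the shared edge, taken as the x-axis) and source-to-pseudosource distances σ0, σ1, find the point on the edge where the two distance functions agree. A hedged sketch, with our own variable names:

```python
import numpy as np

def equidistant_point(x0, y0, s0, x1, y1, s1):
    """Solve  s0 + sqrt((x-x0)^2 + y0^2) = s1 + sqrt((x-x1)^2 + y1^2)  for x.
    Squaring twice yields a quadratic; roots introduced by squaring are
    filtered out by checking the original equation at the end."""
    c = s1 - s0
    a = 2.0 * (x1 - x0)
    b = x0**2 - x1**2 + y0**2 - y1**2 - c**2
    if abs(c) < 1e-12:
        # equal offsets: the equation (a*x + b) = 0 is linear
        roots = np.array([-b / a]) if abs(a) > 1e-12 else np.array([])
    else:
        # (a*x + b)^2 = 4 c^2 ((x - x1)^2 + y1^2)  expands to A x^2 + B x + C = 0
        A = a * a - 4 * c * c
        B = 2 * a * b + 8 * c * c * x1
        C = b * b - 4 * c * c * (x1 * x1 + y1 * y1)
        if abs(A) > 1e-12:
            roots = np.roots([A, B, C])
        elif abs(B) > 1e-12:
            roots = np.array([-C / B])
        else:
            roots = np.array([])
    return [r.real for r in roots
            if abs(r.imag) < 1e-9 and np.isclose(
                s0 + np.hypot(r.real - x0, y0),
                s1 + np.hypot(r.real - x1, y1))]

# Symmetric pseudosources at height 1 over x = 0 and x = 4:
# the windows should be trimmed at the midpoint x = 2.
split = equidistant_point(0, 1, 0, 4, 1, 0)
```

MMP performs this computation for every pair of overlapping windows on an edge, which is exactly the cost that later filtering-based methods try to avoid.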

Figure 10. Illustration of window trimming. (a) Two overlapping windows, whose unfolded pseudosources lie on opposite sides of an overlapping interval on the edge. (b) By solving a quadratic equation for the point at which the two distance functions agree, the two windows are trimmed into disjoint ones; a special case is also shown. From (Surazhsky et al., 2005).

It can be observed that MMP’s time cost is positively correlated with the number of windows arriving at each edge. To minimize it, the MMP algorithm borrows the wavefront propagation paradigm appearing in Dijkstra’s algorithm and fast marching (Sec. 3.2.1) and propagates windows from near to far by maintaining a priority queue. This paradigm ensures that redundant windows are trimmed at the earliest possible stage. When propagation is finished, each edge of the mesh is subdivided into a list of end-to-end linked windows, from which the geodesic distance to any point on the surface can be recovered. Mitchell et al. proved that the algorithm creates at most O(n²) windows. Thus, it can be derived that MMP solves the SSGD problem in O(n² log n) time and O(n²) space, where n is the number of vertices of the mesh. The key component of their proof is the analysis of the maximal number of windows arriving at each edge, which is O(n). Nevertheless, this analysis is too pessimistic and is inconsistent with MMP’s practical performance.

The MMP algorithm (Mitchell et al., 1987) is commonly viewed as a landmark in the research of polyhedral geodesic algorithms. Its distinct contribution is the window propagation framework, which contains three major components: window propagation, window pruning (e.g., trimming), and window management (e.g., a priority queue). This framework is employed by most of the following works (Chen and Han, 1990; Surazhsky et al., 2005; Xin and Wang, 2009; Xu et al., 2015; Qin et al., 2016), which differ in the techniques used for these three components.

Although it is straightforward for the continuous Dijkstra technique to use a priority queue to manage windows, an extra O(log n) cost per queue operation is introduced, which slows down the computation. Addressing this issue, Chen and Han (Chen and Han, 1990) proposed the CH (Chen-Han) algorithm. Their algorithm manages windows with a First-In-First-Out (FIFO) queue, whose per-operation overhead is O(1). For window pruning, they proposed a very simple rule named “one angle, one split” to remove redundant windows around mesh vertices (Figure 11).

Figure 11. A vertex splits any window spanning it into two halves. The “one angle, one split” rule shows that of any two windows passing through the vertex, only three of the four split windows can provide shortest paths and continue propagation. As illustrated, the three green windows are valid and continue propagation. The red one contains no shortest paths beyond the triangle and stops propagating. From (Xin and Wang, 2009).

This rule only addresses the redundancy around vertices and is tolerant of overlapping windows. Thus, it is much less powerful than the trimming rule used by the MMP algorithm (Mitchell et al., 1987). However, it can still be proved that the number of windows generated by the CH algorithm is at most O(n²). This analysis also supports the view that the worst-case analysis of the MMP algorithm is too pessimistic and that its practical performance should be much better. Since the CH algorithm uses a FIFO queue to manage windows and has an O(n²) window complexity, its overall time complexity is O(n²). In addition, the CH algorithm does not store propagated windows on edges to perform window pruning. Thus, its space cost is O(n).

Theoretically, the CH algorithm achieves the best asymptotic complexities to date. However, its practical performance is poor. Surazhsky et al. (Surazhsky et al., 2005) reported that their implementation of the MMP algorithm runs in sub-quadratic time and is many times faster than Kaneva and O’Rourke’s implementation of the CH algorithm (Kaneva and O’Rourke, 2000). Since then, several methods have been proposed to improve the practical performance of the MMP algorithm. Liu et al. (Liu et al., 2007) observed that floating-point error frequently causes degenerate window propagations when the MMP algorithm is applied to real-world models. To make the MMP algorithm more robust, they conducted a systematic analysis of all the degenerate cases and proposed techniques to handle each accordingly. Observing that the half-edge data structure used in Surazhsky et al.’s implementation (Surazhsky et al., 2005) may generate redundant windows, Liu (Liu, 2013) proposed to implement the MMP algorithm with an edge-based data structure. Experimental results show that on average the edge-based version runs 44% faster and uses 29% less memory.

In summary, the MMP algorithm’s practical success comes from its effective wavefront propagation paradigm, which enables the removal of redundant windows at the earliest stage. However, removing redundant windows is a complex and expensive process, involving insertion of each newly propagated window into an ordered list and trimming of overlapping parts by solving quadratic equations.

Addressing this issue, Xin and Wang (Xin and Wang, 2009) proposed the ICH (Improved Chen-Han) algorithm, which combines the advantages of the MMP and CH algorithms. From the MMP algorithm, they borrowed the wavefront propagation paradigm, using a priority queue to order window propagations according to their distances. To avoid MMP’s costly window trimming operations, they borrowed the “one angle, one split” rule from the CH algorithm and extended it into three novel window filtering rules (Figure 12).

Figure 12. Illustration of ICH’s window filtering rules. The key idea is to filter out redundant windows using the minimum-so-far distances at vertices. Left: one rule and its symmetric counterpart; Right: the remaining rule. From (Xin and Wang, 2009).

Experimental results show that these new rules remove more than 99% of redundant windows during propagation. Theoretically, introducing the priority queue increases ICH’s time complexity to O(n² log n), and its space complexity is no longer linear, since the queue may hold many windows simultaneously. In practice, however, performance increases dramatically: experimental results show that ICH greatly outperforms the original CH algorithm. In addition, although ICH usually propagates more windows, it runs comparably to the MMP algorithm (Surazhsky et al., 2005) while using considerably less space. This reveals a trade-off between the effectiveness of window pruning strategies and their costs.

One prominent difference between MMP’s and ICH’s window pruning rules is that MMP’s trimming rule involves interaction between windows, while ICH’s “one angle, one split” and window filtering rules do not. More specifically, ICH filters redundant windows by comparing distances against the minimum-so-far distances at vertices, a relatively independent process that is ideal for parallelization. However, there is still one obstacle: to remove redundant windows at the earliest stage, ICH organizes window propagations in a strict order with a priority queue, which is sequential. Fortunately, since the correctness of window propagation algorithms is independent of the order of window propagations, this strategy can be relaxed for parallelization. Based on the above observations, Ying et al. (Ying et al., 2014) proposed the PCH (Parallel Chen-Han) algorithm, a parallel version of the ICH algorithm. In their algorithm, the K nearest windows are selected and propagated in parallel at each iteration, where K is a user-specified parameter (Figure 13).

Figure 13. The PCH algorithm selects the K nearest windows and assigns them to different GPU threads. Each thread propagates the selected windows and filters the redundant ones independently. Then, the newly created windows are collected and re-organized in the memory pool. From (Ying et al., 2014).

Then, the propagated windows are filtered in parallel according to the minimum-so-far distances at vertices. To avoid data conflicts at vertices, the distance updates are delayed until the propagation of the selected windows has finished. Experimental results show that PCH propagates slightly more windows than the ICH algorithm, but runs an order of magnitude faster.

The performance of the PCH algorithm shows that slightly relaxing the order of propagation does not have a large impact on window pruning effectiveness. In the same spirit, Xu et al. (Xu et al., 2015) proposed the FWP (Fast Wavefront Propagation) technique to accelerate the MMP and ICH algorithms by replacing the strictly ordered priority queue with a loosely ordered bucket-based FIFO queue. More specifically, windows with similar distances are stored in the same bucket and propagated in FIFO order. Although straightforward, their method performs well in practice, given the observation that both MMP and ICH spend roughly 70% of their running time maintaining the priority queue (Figure 14).
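The bucket idea can be sketched as a small data structure: items whose priorities fall within the same bucket width are returned in FIFO order, trading strict ordering for cheaper maintenance. This is an illustrative stand-in with our own names (nonnegative priorities assumed), not the actual FWP implementation:

```python
from collections import deque

class BucketQueue:
    """Loosely ordered queue: items whose priorities land in the same
    bucket (of size `width`) are popped in FIFO order. Assumes
    nonnegative priorities and that pop() is only called when non-empty."""
    def __init__(self, width):
        self.width = width
        self.buckets = {}      # bucket index -> deque of (priority, item)
        self.current = 0       # smallest bucket index that may be non-empty

    def push(self, priority, item):
        b = int(priority / self.width)
        self.buckets.setdefault(b, deque()).append((priority, item))
        self.current = min(self.current, b)

    def pop(self):
        # advance to the first non-empty bucket, then pop FIFO within it
        while self.current not in self.buckets or not self.buckets[self.current]:
            self.current += 1
        return self.buckets[self.current].popleft()

q = BucketQueue(width=1.0)
for p, item in [(0.5, 'a'), (1.7, 'b'), (0.2, 'c')]:
    q.push(p, item)
order = [q.pop()[1] for _ in range(3)]
# 'a' is popped before the nearer 'c' (same bucket, FIFO), then 'b':
# the ordering is only approximate, which is exactly the FWP trade-off.
```

Since the correctness of window propagation does not depend on the processing order, this looser ordering only affects how early redundant windows get pruned, not the final distances.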

Figure 14. The running time breakdown of the MMP algorithm. Note that maintenance of the priority queue is the most time-consuming part, occupying around 70% of the running time. The ICH algorithm has a similar percentage. From (Xu et al., 2015).

Experimental results show that the FWP-MMP algorithm runs 3-10 times faster than the MMP algorithm, and the FWP-CH algorithm runs 2-5 times faster than the ICH algorithm.

Both PCH and FWP employ a batch processing strategy to accelerate existing algorithms. However, their selection of window batches depends only on distances and does not consider the windows’ geometric interrelationships. Observing that the prunings of all propagated windows within a triangle are inter-dependent, Qin et al. (Qin et al., 2016) proposed the VTP (Vertex-oriented Triangle Propagation) algorithm. Their algorithm organizes window propagations with a triangle-oriented growing scheme. In this scheme, a traversed area is defined as the union of all visited triangles. The boundary of this traversed area forms the propagation wavefront, which contains all the windows to be propagated. This traversed area is then expanded in a Dijkstra-like style by gradually enclosing adjacent triangles (Figure 15). The algorithm terminates when the traversed area covers the entire surface. During expansion, windows entering the same triangle from the same edge are organized in a batch and propagated simultaneously. Within such batches, redundancy checks can be performed intensively between any pair of windows. To remove a maximal number of windows at low cost, the authors proposed three rules summarized from an exhaustive list of scenarios for pairwise window pruning inside a triangle. Note that these pruning scenarios are listed under the assumption that the window trimming technique (Surazhsky et al., 2005) is not used, due to its relatively high computational cost of solving quadratic equations. Experimental results show that their algorithm runs 4-15 times faster than the MMP and ICH algorithms and 2-4 times faster than the FWP-MMP and FWP-CH algorithms, while also consuming the least memory, which ranks it as the best performing polyhedral geodesic algorithm to date.

Figure 15. Illustration of the triangle-oriented growing scheme. The black and red line segments denote the boundary of the traversed area. The green line segments denote the triangle to be added to the traversed area. Left: the triangles in a vertex’s 1-ring neighbourhood are added to the traversed area. The red line segments become interior edges; thus, the windows on them are propagated through the traversed area until they reach the updated boundary edges. Right: the traversed area before growing; after growing, the newly enclosed triangle is merged into it. From (Qin et al., 2016).

Remark: Most polyhedral geodesic algorithms aim at solving the SSGD problem, while Balasubramanian et al. (Balasubramanian et al., 2009) proposed an algorithm to solve the APGD problem. The main idea of their algorithm is to build a vertex-to-vertex graph by computing the minimal-geodesic distances between all pairs of vertices of the mesh. The minimal-geodesic distance is the length of the shortest path between two vertices under the condition that the path contains no intermediate vertices. Then, the shortest distances between any pair of vertices can be computed by searching the vertex-to-vertex graph using standard algorithms such as Dijkstra’s algorithm (Dijkstra, 1959). To compute the minimal-geodesic distance, they employed a triangle chain flattening method and proposed to reduce redundancy through visibility. However, the overall method is essentially the same as the standard window propagation algorithms without any pruning. Thus, its practical performance is expected to be much worse than other state-of-the-art geodesic algorithms. Note that in (Balasubramanian et al., 2009), the runtime of Surazhsky et al.’s (Surazhsky et al., 2005) MMP implementation is estimated rather than measured experimentally; thus, the comparison may be inaccurate.

Shortest Path Construction.

All the reviewed methods employ a straightforward backtracing strategy to construct geodesic paths, thus supporting SSSP as well. In general, there are two related approaches, answering two kinds of queries, which can be easily adapted to all polyhedral methods:

  • Shortest paths to vertices: As Figure 16 shows, for a vertex v whose shortest distance is obtained from window w, let s′ be the unfolded image of w’s pseudosource in the plane of the triangle incident to w. Then, the point where the path enters w’s edge can be computed by intersecting the segment from v to s′ with that edge. The entire path can be constructed by backtracing in a similar way. Note that the pseudosource image for each mesh vertex can be recorded during window propagation and does not require any extra data structure, which is space-efficient.

  • Shortest paths to generic points: As Figure 17 shows, propagated windows must be stored on the edges of the mesh as the data structure supporting geodesic path queries to arbitrary points. For an interior point p of a triangle, the window containing its shortest path is the one minimizing the distance to p over the set of all windows on the three edges of the triangle. Then, the shortest path of p can be computed by backtracing from p through that window until reaching the source. Note that this approach can be space-consuming, since it requires storing all propagated windows on edges.

Figure 16. Illustration of backtracing from a mesh vertex. From (Xin and Wang, 2009).
Figure 17. Illustration of backtracing from an arbitrary surface point. In this scenario, propagated windows are required to be stored on edges; however, they can be organized in different ways. Left: windows on edges are trimmed into non-overlapping ones, as in the MMP algorithm (Surazhsky et al., 2005). Right: redundant windows on edges are filtered, as in the ICH and VTP algorithms (Xin and Wang, 2009; Qin et al., 2016); the remaining windows may overlap with each other. From (Xin and Wang, 2009).

4.1.2. Polyhedral Methods – Approximated

Methods that exactly solve polyhedral geodesic problems can be too time-consuming for large-scale applications. Thus, some methods approximate the polyhedral distance, and can be applied in scenarios that are insensitive to small errors in accuracy.

To solve the SSGD problem efficiently, Surazhsky et al. (Surazhsky et al., 2005) proposed an approximate version of the MMP algorithm that reduces time and memory costs. This algorithm works just like the MMP algorithm, with one difference: before every propagation step, the approximate version tries to merge a window with adjacent windows on the same edge (Figure 18). To ensure that the merged window is valid and has bounded error, the two windows to be merged must satisfy five conditions:

Figure 18. Merging two windows into a new window. The pseudosource of the new window is chosen so that the new window covers the visible regions of both merged windows. From (Surazhsky et al., 2005).
  • Directionality. The two windows must have the same propagation direction with respect to the edge.

  • Visibility. To guarantee that the approximated distance field has no gaps, the new window must cover the visible regions of the merged windows.

  • Continuity. The distance field along an edge must be continuous. Thus, the distances at the endpoints of the new window must be the same as the corresponding ones of the merged windows.

  • Accuracy. The algorithm bounds the local error of each merging step by a user-specified threshold.

  • Monotonicity. To guarantee the algorithm’s termination, the new window must have larger distances than the merged windows. However, the authors did not encounter any infinite loops in practice and thus they did not perform the corresponding checks in their implementation.

In their experiments, the local error threshold is set to 10%. The test results show that the merging operation is very effective, reducing the WPE (Windows Per Edge) to a number only slightly larger than 1. Thanks to this, their approximate MMP algorithm runs in O(n log n) time and outperforms the Fast Marching method (Kimmel and Sethian, 1998) (see Section 3) in both running time and accuracy.

To solve the APGD problem efficiently, Xin et al. (Xin et al., 2012b) proposed an algorithm which uses the ICH algorithm (Xin and Wang, 2009) as a subroutine. The main idea is to embed pre-computed geodesic triangles in the surface so that the geodesic distance between two points can be approximated by a Euclidean distance after unfolding. Their algorithm contains two steps:

  • Pre-processing step. In this step, sample points are first distributed on the surface. Then, the ICH algorithm (Xin and Wang, 2009) is used to compute the geodesic distance field with the samples as sources. Using the computed distance field, a Delaunay triangulation of the samples is built according to the polyhedral geodesic metric (Figure 19). Finally, the distances between each pair of sample points, together with the distances between each mesh vertex and the vertices of the geodesic triangle containing it, are saved for later use.

  • Query step. For any two points on the surface, the two geodesic triangles containing them are first found. Then, the two triangles, together with the two points, are unfolded onto the plane while preserving the geodesic edge lengths. After the unfolding, the geodesic distance between the two points is approximated by the Euclidean distance between them.

Figure 19. The Delaunay triangulation of a mesh according to the polyhedral geodesic metric. From (Xin et al., 2012b).

The pre-processing cost follows directly from the time cost of the ICH algorithm, run once per sample; after pre-processing, the distance query between any two points on the surface can be answered in constant time. Intuitively, there is a trade-off between accuracy and computational cost in this algorithm: the more sample points, the slower the pre-processing and the larger the space requirements, but the more accurate the result.

Shortest Path Construction.

The approximate MMP algorithm (Surazhsky et al., 2005) employs a backtracing strategy similar to the original MMP method, with one key difference: since its windows are merged, the positions of their pseudosources are inaccurate and hence not suitable for backtracing. Thus, the authors proposed to maintain a list of the successively merged windows and use the original pseudosources for backtracing, which yields a more accurate result. Xin et al. (Xin et al., 2012b) employed a quite different strategy to construct shortest paths: since the data structure of their method contains only geodesic distances and no windows, they have only an approximate shortest distance field. Thus, they proposed to construct shortest paths using a gradient descent strategy similar to the ones used in (Kimmel and Sethian, 1998; Crane et al., 2013b) (see Section 3).

4.1.3. Graph-based Methods

Graph-based methods rely on the assumption that the shortest geodesic distance/path between any pair of points p and q can be approximated by a chain of shortest distances/paths through intermediate points belonging to a finite set of points of the surface, such that the shortest distance/path between pairs of points in this set is precomputed and encoded in the edges of a graph G. Methods differ in the choice of the nodes and edges of G and in the strategy used to build G. Once G is given, the PPGP and SSGD problems are easily resolved through shortest path queries on G, most frequently with standard Dijkstra search (Dijkstra, 1959). Several methods in this class provide data structures that can also support efficient solutions to SSSP or APGD/APSP.
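Once the graph G is built, the queries reduce to textbook shortest-path search. A minimal Dijkstra sketch (our own names), which also records predecessors so that paths can be recovered by backtracing, as several of the methods below do:

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest distances on a weighted graph.
    adj: {node: [(neighbor, weight), ...]}. Returns (dist, pred), where
    pred supports path reconstruction by backtracing to the source."""
    dist = {source: 0.0}
    pred = {source: None}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue                       # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v], pred[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, pred

# Tiny example: going a -> b -> c (cost 2) beats the direct arc a -> c (cost 3).
adj = {'a': [('b', 1.0), ('c', 3.0)],
       'b': [('a', 1.0), ('c', 1.0)],
       'c': [('a', 3.0), ('b', 1.0)]}
dist, pred = dijkstra(adj, 'a')
```

For PPGP, the same search can be pruned as soon as the target's distance is settled, rather than visiting the whole graph.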

Figure 20. A shortest path (black line) and the corresponding distance field (color map) computed on the edge graph (left), and the exact solution (right). From (Campen et al., 2013).

The quest for graph-based methods probably started with the PhD thesis of Lanthier (Lanthier, 1999). The starting point is the observation that the shortest geodesic path between a pair of vertices of a mesh can be approximated by a shortest path on the graph of its edges. This approximation, however, turns out to be poor: such a path is not allowed to cross the faces of the mesh, but is forced to meander along their boundaries, resulting in a zigzag walk that is usually far more wiggly than the exact geodesic path (see Figure 20 for an example). Starting from this observation, Lanthier et al. (Lanthier et al., 1997, 2001) present three strategies to extend the edge graph to a graph G that incorporates paths across faces. They build the set of nodes by distributing Steiner points along the edges of the mesh, and interconnect them with arcs that walk either along edges or across faces:

  • The fixed scheme distributes a fixed number of Steiner points uniformly along each edge and defines the node set as the union of all mesh vertices and all Steiner points. Next, for each triangle, it interconnects all the vertices and Steiner points on the triangle’s boundary to form a so-called face graph, which connects each node (either a vertex or a Steiner point) with all the nodes belonging to the other two edges of the triangle and with its two neighbors along the edge it belongs to. In practice, the face graph of a triangle is a complete graph, except for omitting arcs between collinear nodes that are not immediately adjacent along edges (see Figure 21 left). Graph G is then obtained by collecting all the face graphs. We know that an exact polyhedral geodesic path is a polyline having its nodes at either vertices or edges of the mesh (Mitchell et al., 1987). The segment of one such path traversing a given triangle is approximated by an edge in the triangle’s face graph connecting the nodes closest to the points where the segment crosses the corresponding edges (see Figure 21 right).

    Figure 21. The face graph of a triangle for the fixed scheme with two Steiner points per edge (left). The portion of a shortest path crossing the triangle is approximated with an edge of the face graph (right). From (Lanthier et al., 2001).
  • The interval scheme is similar to the previous one, but Steiner points are distributed at a uniform distance along each edge, so that the number of Steiner points per edge depends on edge length. Lanthier et al. show that this scheme can achieve the same accuracy as the previous one with a smaller number of Steiner points, i.e., a more compact graph (see Figure 22).

    Figure 22. Approximated paths computed with the fixed scheme (left) and the interval scheme (right), comparing the shortest geodesic path with the approximated path. From (Lanthier et al., 2001).
  • The spanner scheme uses the same Steiner points as the interval scheme, but builds a sparser graph. Instead of connecting each node in the face graph to all other visible nodes in the triangle, a predefined set of cones of given width emanating from each node is considered, and the node is connected to at most one other node per cone (see Figure 23). In this way, graph G is guaranteed to have a number of arcs that is at most a fixed multiple of the number of nodes.

Figure 23. Spanner edges from a point (thick red lines) for cones of a given width; thin red lines bound the cones. From (Lanthier et al., 2001).

In all schemes, shortest paths are computed by a Dijkstra search on G: SSGD can be resolved with a complete visit of G, while for PPGP the search can be pruned as soon as the distance at the target point becomes definite. SSSP is supported by recording during the search the predecessor of each node in the path to the source: in this way, shortest paths can be retrieved by back-tracing from any vertex to the source.

Lanthier et al. (Lanthier et al., 2001) prove that if the number of Steiner points is large enough, they can approximate the geodesic shortest path within an additive bound that is a function of the length of the longest edge in the mesh. The interval scheme is more accurate, while the spanner scheme is more compact and faster, at the cost of some loss in accuracy. The theoretical bounds are conservative, though, and reasonable bounds can be guaranteed only with a very large number of Steiner points, which is impractical for applications. On the other hand, the authors perform extensive testing of the different schemes on polyhedral terrains and provide empirical evidence that very accurate results are achieved with just an average of 5-6 Steiner points per edge. More recent comparative tests in (Campen et al., 2013) suggest that the situation may be less favorable if the distance metric is anisotropic: in that case, they report needing 10-20 Steiner points per edge on average to achieve accurate results. Nevertheless, Lanthier’s techniques remain a reference in the context of graph-based methods, because they combine practical effectiveness with ease of implementation.

In (Mata and Mitchell, 1997), Mata and Mitchell propose a scheme that is similar in spirit to the spanner scheme of Lanthier. Differently from Lanthier, they do not add Steiner points, but enrich the edge graph with new arcs connecting vertices of the mesh, in such a way that for each vertex a rather uniform set of directions radially arranged about it is explored. They build equally spaced cones about each vertex and propagate the boundaries of such cones according to an exact polyhedral geodesic tracing (e.g., as in MMP). Propagation is stopped as soon as the two boundaries of a cone intersect different edges of the mesh: this means that there exists a vertex within the cone, which splits it in two; they then determine the shortest path between the two vertices and add it as an edge to the graph. See Figure 24. Note that in this case an edge of the graph is not a simple line segment but rather a polyline. Shortest paths are computed through Dijkstra, as in Lanthier’s method.

Figure 24. A cone determined by two rays (continuous paths) from a vertex is split at an interior vertex; the path between the two vertices (dashed) is added as an edge in the graph. From (Mata and Mitchell, 1997).

Following the same line as Lanthier’s approach, Aleksandrov et al. (Aleksandrov et al., 1998; Aleksandrov et al., 2000) proposed different techniques to sample Steiner points on edges in geometric progression (i.e., denser close to the vertices and coarser towards the edge midpoint). The density, and hence the number, of sampled points depends on a user-specified parameter ε, and they guarantee that the length of the approximated path is within a factor (1 + ε) of the shortest geodesic path. The same authors, in (Aleksandrov et al., 2005), propose a different strategy that samples Steiner points along the bisectors of triangles, rather than on triangle edges, with a similar geometric progression. They show that this strategy achieves the same approximation factor with a better time complexity.
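The geometric-progression placement can be sketched as follows; the starting offset and growth ratio here are illustrative placeholders, whereas Aleksandrov et al. derive them from the desired approximation factor ε and the local geometry:

```python
def steiner_points(edge_len, eps=1e-3, ratio=1.4):
    """Place Steiner points on one half-edge in geometric progression:
    the first point lies at distance eps from the vertex, and successive
    distances grow by `ratio` up to the edge midpoint (the other half is
    sampled symmetrically from the opposite vertex)."""
    pts, d = [], eps
    while d < edge_len / 2.0:
        pts.append(d)
        d *= ratio          # spacing grows away from the vertex
    return pts

pts = steiner_points(2.0)
# dense near the vertex (first points ~1e-3 apart), coarse near the midpoint
```

The clustering near vertices reflects where geodesics bend: paths passing close to a vertex are the ones most distorted by a coarse discretization, so that is where the sampling must be finest.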

The results in (Aleksandrov et al., 1998; Aleksandrov et al., 2000, 2005; Mata and Mitchell, 1997) are mostly of theoretical interest, as the number of Steiner vertices necessary to warrant a small approximation factor is too large for practical purposes. Unfortunately, no experimental results were provided to show whether the sampling techniques proposed in such works can provide better results in practice than the original techniques of Lanthier.

Schmidt et al. (Schmidt et al., 2006) build a discrete exponential map (DEM) approximation in order to parametrize normal patches of a surface. In the context of the problems we analyze here, they are interested in resolving SSGD (not SSSP) within the limited scope of a normal neighborhood of the source. In order to estimate geodesic distances, they rely solely on the edge graph of the mesh. Once a shortest path from the source to a generic vertex has been found in the edge graph, they treat the edges in the path as displacement vectors and use parallel transport to bring all such vectors into a common frame. Parallel transport is implemented with a pair of rotations that align the local frames at the vertices along the path. See Figure 25. Next, they estimate the geodesic distance between the source and the vertex as the length of the vector sum of all such displacements. The length and orientation of the resulting vector in fact provide the polar coordinates of the vertex in the DEM centered at the source. In practice, this mechanism attempts to “straighten” the wiggly path from the edge graph. Note, however, that they do not compute the straightened path, but only estimate its length. No theoretical analysis of accuracy is provided in (Schmidt et al., 2006), but empirical tests show good results and resilience to the quality of meshing on reasonably smooth shapes. See Figure 26.
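The straightening idea is easiest to see in the plane, where parallel transport is trivial and the vector sum of edge displacements recovers the Euclidean distance exactly, no matter how wiggly the edge path is. A minimal sketch with our own names (on a curved surface, each displacement would first be rotated into a common tangent frame):

```python
import numpy as np

def straightened_distance(path):
    """Estimate the distance between the endpoints of a polyline as the
    norm of the sum of its edge displacement vectors. In the plane this
    equals the straight-line distance; on a surface, summing transported
    displacements gives the Schmidt et al.-style straightened estimate."""
    path = np.asarray(path, dtype=float)
    disp = np.diff(path, axis=0)           # per-edge displacement vectors
    return np.linalg.norm(disp.sum(axis=0))

# A zigzag edge path of length 4*sqrt(2) ~ 5.66 between points 4.0 apart:
zigzag = [(0, 0), (1, 1), (2, 0), (3, 1), (4, 0)]
d = straightened_distance(zigzag)          # recovers 4.0, not 5.66
```

This also makes the limitation discussed next apparent: the vector sum "tunnels" straight through whatever lies between the endpoints, so it cannot account for holes or bumps outside a normal neighborhood.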

Figure 25. The normal coordinates of the unknown radial geodesic in (a) can be approximated using the known geodesics to and from an intermediate vertex. The displacement vector (in normal coordinates at the source) is transferred to the tangent plane at the intermediate vertex using a 2D rotation, producing the red vector in (b). This vector approximates the remaining displacement and can be added to it (c) to get the approximate result. From (Schmidt et al., 2006).
Figure 26. The DEM with origin at the center of the checkerboard is computed on different discretizations of the sphere (below); sampling rate and mesh quality have small effects on this decal. From (Schmidt et al., 2006).

Campen et al. (Campen et al., 2013) elaborate on the same approach in the presence of an anisotropic metric, aiming at a method to resolve SSGD on the whole surface. They start from the observation that the straightened distance from a path in the edge graph introduces a shortcut that cannot take into account either the presence of holes (boundaries) in the domain, or geometric variations of the surface (bumps). Hence, the straightening approach by Schmidt et al. cannot be accurate outside a normal neighborhood. In order to overcome this limitation, they propose a Short-Term Vector Dijkstra algorithm (STVD) that applies shortcuts locally in the context of a standard Dijkstra search in the edge graph. During the relax phase of Dijkstra’s algorithm, the distance from the source to a vertex is updated by taking the minimum, over the last k predecessors of the vertex along the path, of the predecessor’s distance summed with the shortcut, computed by edge unfolding, from the vertex to that predecessor. In practice, the vector shortcut of Schmidt et al. is applied just locally, on a sliding window of length k that moves along the path. Values of k between 5 and 10 are reported in the experiments, with higher values used for higher anisotropies of the input metric. Shortcuts are computed with a technique different from (Schmidt et al., 2006): the edge path is unfolded to the plane by preserving the lengths of edges, while mapping the angles between pairs of consecutive edges according to the exponential map at the common vertex. Note that this procedure is fully intrinsic, as the rescaling of angles depends only on the total angle about each vertex in the path. The length of a shortcut is measured by the Euclidean distance between the source and the target in the unfolded path. In the presence of an anisotropic distance, the shortcut may be segmented in order to apply different weights in the segments corresponding to different edges. See Figure 27. Campen et al. (Campen et al., 2013) compare this approach, on a single example with an anisotropic metric, against several other methods, among which FMM, HM, OUM, and Lanthier’s, by taking as reference a solution computed with Lanthier’s method with 200 Steiner vertices per edge (which is assumed to be nearly exact). They report that only Lanthier’s with 10-20 Steiner points per edge beats STVD in terms of accuracy. They also comment that the advantage of STVD is much less evident for an isotropic metric; in particular, they report that for low anisotropies the FMM applied to the intrinsic Delaunay triangulation of a subdivided version of the input mesh is very competitive; no experiment is shown for this case.
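The intrinsic unfolding used for the shortcuts can be sketched like this. The assumed inputs are the angle between consecutive path edges at each interior vertex and that vertex’s total angle, both of which would come from the mesh.

```python
import math

def unfolded_shortcut(edge_lengths, path_angles, total_angles):
    """Unfold an edge chain into the plane, preserving edge lengths and
    rescaling each interior angle by 2*pi / total_angle (the intrinsic
    exponential map at the vertex); return the Euclidean distance
    between the two endpoints of the unfolded chain, i.e. the length of
    the vector shortcut."""
    x = y = heading = 0.0
    for i, length in enumerate(edge_lengths):
        if i > 0:
            scaled = path_angles[i - 1] * (2.0 * math.pi / total_angles[i - 1])
            heading += math.pi - scaled  # deviation from going straight on
        x += length * math.cos(heading)
        y += length * math.sin(heading)
    return math.hypot(x, y)
```

At a flat vertex (total angle 2π) the rescaling is the identity, so a chain that already goes straight unfolds to a straight segment of the same total length.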

Figure 27. Unfolding of edge chains to the plane. Edge lengths and relative 1-ring angles are preserved. The sum vector (red) is then subdivided according to the orthoprojection of the individual edges. The resulting portions are measured by the respective norms, here visualized as tensor ellipses (blue), and their lengths are summed to get the weight of the shortcut. From (Campen et al., 2013).

Ying et al. (Ying et al., 2013) start from a basic fact stated in (Mitchell et al., 1987): each exact (polyhedral) geodesic path is a polyline having its internal joints either on edges, or at saddle vertices of the mesh (a saddle vertex is a vertex having negative Gaussian curvature; see Section 2.2). A path is said to be direct if it does not contain any intermediate saddle vertex. On the basis of this observation, they build the Saddle Vertex Graph (SVG), having the vertices of the mesh as its set of nodes, and one arc for each direct path between a pair of vertices. Note that each arc is not a single line segment but a polygonal path. Such paths need not be stored, as they can be back-traced by unfolding triangles starting at the destination: only a starting direction per edge needs to be stored. If the SVG contains all direct paths, then the exact shortest geodesic path between any pair of vertices can be found as a shortest path on the SVG. In this respect, the SVG can be considered a data structure to support the APSP problem, as well as all the other boundary problems. In principle, though, the SVG could be a complete graph (hence intractable in applications, at least on large meshes). Ying et al. show empirically that real-world models contain a large fraction of saddle vertices and that most paths are not direct. Nevertheless, a full SVG may still be too large for practical purposes. Ying et al. overcome this limitation by computing a sparsified SVG as follows: for each vertex, they consider only a geodesic disc containing the K nearest vertices, and only direct paths to such vertices contribute to the SVG. In this way, K is the maximum degree of nodes in the graph. The SVG is built by running an exact method (ICH/MMP) from each vertex, pruning the search as soon as the first K vertices have finalized their distance. Variations of the Dijkstra search, which resolve the different boundary problems after the SVG has been built, are discussed in the paper.
They run experiments with values of K between 8 and 1000, and report a low mean approximation error of shortest paths from the SVG already for moderate values of K. The construction of the SVG can be quite time expensive for high values of K, but reasonable times can be achieved with a GPU implementation that computes paths starting at different sources independently in parallel, sharing just the mesh in read-only mode among the different threads. Concerning query times, Ying et al. compare the SVG search with the heat method (with pre-conditioned matrix): they report that the SVG is faster for small values of K, comparable for intermediate values, and slower, but more accurate, for larger values of K; they also observe that the SVG scales better to large meshes. They also compare the SVG to the GTU method (Xin et al., 2012b), reporting that the SVG is faster and requires less memory; no comparison of accuracy is reported.
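Once the (sparsified) SVG is available, a point-to-point query is a plain shortest-path search on the graph. A minimal sketch, assuming the arcs and their precomputed geodesic lengths are stored in an adjacency map:

```python
import heapq

def svg_query(adj, source, target):
    """Point-to-point query on an SVG-like shortcut graph: `adj` maps a
    vertex to a list of (neighbor, geodesic_length) arcs, precomputed by
    an exact method. A plain Dijkstra search returns the approximate
    geodesic distance between two mesh vertices."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")
```

The search cost depends on K (the node degree) rather than on mesh resolution, which is the source of the scalability reported above.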

Figure 28. Window propagation in the exact method (left) and in the construction of the DGG (right): the window is small enough to stop propagation and allow the path from the source to any vertex in the cone beyond the window to travel through an intermediate vertex. From (Wang et al., 2017).

Wang et al. (Wang et al., 2017) define the Discrete Geodesic Graph (DGG), which improves over the SVG by allowing approximated paths to also go through non-saddle vertices. They start from the key observation that the cones resulting from window propagation in the exact MMP/CH method become progressively narrower. If a cone is long enough, then any direct path from the source to a vertex in the cone can be approximated by the sum of two shortest paths through an intermediate vertex lying on the boundary of the cone and preceding the target. See Figure 28. In particular, they show that if the geodesic distance between the source and the target is larger than a certain threshold, which depends on the parameters defining a window that bounds the cone and on a tolerance ε, then the approximation error is within a multiplicative factor (1+ε). So, while generating direct paths as in the SVG, they set an early termination of window propagation when the length of the cone reaches the above threshold. Note that a direct path may eventually be approximated with a chain of paths through non-saddle vertices, therefore the approximation rate of a generic path extracted from the DGG will not be within (1+ε), but it remains bounded in any case. The DGG is built through two fundamental steps, for which detailed pseudo-code is provided in (Wang et al., 2017): first, the standard MMP window propagation, with early termination as described above, is run from all vertices, and an arc in the DGG is generated for each vertex reached by a window; next, the resulting set of candidate arcs is pruned by deleting all arcs that can be approximated by sequences of other arcs in the graph within the given tolerance; this latter step is performed by running a standard Dijkstra search from each vertex. For anisotropic meshes, in which the fan of arcs incident at some vertices might have gaps, an additional step is run to generate additional arcs (see (Wang et al., 2017) for details). Shortest path queries on the DGG are resolved with an SLF/LLL heuristic (Bertsekas, 1998), combined with a technique that restricts the neighbors to visit during the relax phase, based on the fact that the angle between consecutive edges in a shortest path cannot be smaller than π (see (Wang et al., 2017) for details). The latter pruning technique is reminiscent of an analogous technique presented in (Aleksandrov et al., 2000), based on Snell’s law, for the case of weighted distances. Extensive experiments are presented in (Wang et al., 2017) on meshes of up to 3M vertices and with high accuracies; they make an extensive comparison with the SVG: the DGG beats it in terms of space occupancy, construction time, query time and accuracy, thus resulting globally better in all experiments. They also compare to other methods, including HM on an intrinsic Delaunay triangulation and FWP, which is taken as a reference: the DGG beats all such methods in terms of time performance, and all approximate methods in terms of accuracy, resulting in the best graph-based algorithm to date.

Aiello et al. (Aiello et al., 2015) adopt a hierarchical approach to support the computation of geodesic distances (not explicit paths) between any pair of vertices. Similarly to SVG and DGG, they aim at building a graph that contains many shortcuts between far vertices. They start from a hierarchical partition of the surface into patches: the uppermost level of the hierarchy is a Voronoi partition of the input shape, and each lower level is formed by Voronoi partitions of the patches in the level above. Then a hierarchical graph is built, which contains all shortcuts among vertices on the boundary of each patch (i.e., one complete graph of boundary vertices per patch) and a complete graph per patch only at the lowest level of the hierarchy (i.e., both boundary and internal vertices of the patch are connected via shortcuts). The shortcuts in the graph are computed with an exact method (in the ICH/MMP family). The distance between a pair of vertices is found by a Dijkstra search that is pruned by visiting the graph in a hierarchical manner: the search starts from the patch containing the source at the lowest level of the hierarchy, until it meets its boundary; then it traverses either the sibling patch beyond the boundary, or shortcuts from patches in the upper levels, until a patch containing the target is found; the path is concluded by moving down the hierarchy until the lowest-level patch containing the target is reached. See (Aiello et al., 2015) for details. The authors report quite slow preprocessing times for building the graph, but fast performance and empirically good approximations during queries.

4.2. Local Methods

Local methods start from an initial path connecting two endpoints and aim to refine it into a geodesic path. These methods are usually iterative, and work either by updating one vertex at a time (Sec. 4.2.2) or the whole path at each iteration (Sec. 4.2.3). Point-to-point problems of this kind (PPGP) could in principle be solved as a byproduct of global methods that strive for SSGP (Sec. 4.1). Nevertheless, there are a number of algorithms specifically tailored for it. One good reason to prefer local methods for PPGP is that they are in general faster, easier to code, and require less memory, due to the smaller domain they consider. On the negative side, they only converge to a local minimum, producing a locally optimal geodesic path at best. It follows that their ability to find the global optimum depends on the initialization, which may come from Dijkstra’s algorithm, from heuristics, or from user interaction.

For all methods, the input is a polyhedral mesh and a piece-wise linear path connecting a source with a destination. The computational domain amounts to all the mesh vertices, edges and faces traversed by the path. The scalability of local methods does not depend on mesh size, but rather on the number of elements traversed by the path, which in turn depends on its length and the local mesh density. A particular instance of the problem occurs when source and destination coincide (i.e., the path is a closed loop). Most of the local methods natively support this special case, but there are also methods specifically designed to shrink closed loops on a mesh (Xin et al., 2012a). Shrinking loops is useful in a number of scenarios, for example to refine previously computed homology bases or the set of handles and loops of a mesh (see (Dey et al., 2013) and references therein).

4.2.1. Local assessment of discrete geodesic paths

Key to many methods is the ability to locally assess whether a path is a geodesic or not. To this end, the most important thing to remember is that while in the continuous setting the concepts of straightest and shortest paths coincide, in the discrete setting the two concepts are not always equivalent (Sec. 2.2.1). Polthier and Schmies (Polthier and Schmies, 1998) observed that a path passing through a vertex divides its total angle θ into two components, the left angle θ_l and the right angle θ_r, and defined geodesic paths in terms of these quantities (Fig. 29). In particular, they state that a path is:

  • locally straightest, if θ_l = θ_r;

  • locally shortest, if θ_l ≥ π and θ_r ≥ π.

One can easily observe that if the path traverses a flat area or an edge (i.e., if the total angle θ is 2π) the two definitions above are equivalent, hence the parallel with the smooth theory still holds. Considering non-Euclidean vertices, we obtain that: (i) a straightest path passing through a spherical vertex cannot be locally shortest (if θ < 2π, then θ_l and θ_r cannot both be greater than or equal to π); (ii) there are infinitely many shortest paths passing through a hyperbolic vertex (any solution of θ_l + θ_r = θ with θ_l ≥ π and θ_r ≥ π defines a shortest path). Motivated by these problems of non-existence and non-uniqueness of locally shortest paths, Polthier and Schmies advocate the use of locally straightest paths as a way to trace geodesics on discrete surfaces, and propose both Euler and Runge-Kutta integration schemes based on them (Polthier and Schmies, 1998, 1999). The beauty of locally straightest geodesics is that they exist and are uniquely defined at any point of a polyhedral mesh. However, there is a price to pay for this. In particular, straightest geodesics do not converge to geodesic paths on smooth surfaces under mesh refinement (Lieutier and Thibert, 2009; Kumar et al., 2003), and locally straightest distances do not satisfy the triangle inequality, therefore the straightest geodesic distance is not a metric (Liu et al., 2017). Last but not least, we remind the reader that these local geodesic criteria apply only to the Euclidean embedding, and do not extend to alternative metrics (e.g., weighted or anisotropic), thus limiting the applicability of the algorithms based on them.
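The two criteria are easy to state in code; a small helper, assuming the left and right angles at a path point have already been measured on the mesh:

```python
import math

def classify(theta_left, theta_right):
    """Classify a path through a vertex using the Polthier-Schmies
    criteria: locally straightest iff the left and right angles are
    equal; locally shortest iff both are at least pi. Returns a
    (straightest, shortest) pair of booleans."""
    straightest = math.isclose(theta_left, theta_right)
    shortest = theta_left >= math.pi and theta_right >= math.pi
    return straightest, shortest
```

On a flat crossing both flags are true; at a spherical vertex an equal split is straightest but never shortest; at a hyperbolic vertex many unequal splits are shortest without being straightest.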

Figure 29. Left: the total angle θ at a point is divided into the left angle θ_l and the right angle θ_r by the path that traverses it. The path is: (i) locally shortest if θ_l ≥ π and θ_r ≥ π; (ii) locally straightest if θ_l = θ_r. Right: three different paths connecting pairs of vertices in a cuboid. The path passing through the spherical vertex is locally straightest, but cannot be locally shortest, as its total angle is less than 2π. The path on a facet and the one crossing an edge are both locally shortest and straightest, because θ = 2π. Right image from (Polthier and Schmies, 1998).

4.2.2. Iterative point update

Early methods to transform a general path into a geodesic path rely heavily on the local geodesic criteria introduced in Sec. 4.2.1. Given a path, these methods work by iteratively flattening the mesh around each of its points, and updating the point’s position in order to locally straighten or shorten the path. The whole procedure is repeated until convergence, that is, until each point locally satisfies its geodesic criterion. Since at each iteration only one point is updated, these methods tend to require a high number of iterations to reach convergence. However, the computational cost of each iteration is usually low, as both the local flattening and the vertex update do not involve time-consuming operations.

Figure 30. The local operators used in (Wang, 2004) to locally shorten a path crossing: an edge (left column), a spherical vertex (middle column), and a hyperbolic vertex (right column). If, after straightening the line, the path goes outside the area spanned by the flattening, the path is forced to pass through the local border (see bottom left corner). Vertex operators are chosen by querying a look-up table indexed according to the vertex type (Euclidean, spherical, hyperbolic) and the relation between θ_l and θ_r.

CyberTape (Wang, 2004) strives for locally shortest geodesics, flattening the mesh around each point and tracing a straight line between its two neighbors along the path. If the line crosses new edges or vertices, intersection points are added to the path. Depending on the local configuration, there are multiple ways to locally shorten a path. Fig. 30 shows all the update operators used by the algorithm to handle points at edges, spherical vertices, and hyperbolic vertices. Corner cases are also handled: if the local update would move the path outside the area spanned by the flattening, it is automatically constrained to adhere to the border of the local domain. This may happen any time the border of the flattening is non-convex (see an example at the top right corner of Fig. 30). Vertex operators are chosen by querying a look-up table indexed according to the vertex type (Euclidean, spherical, hyperbolic) and the relation between θ_l and θ_r. The algorithm converges when all the points satisfy the geodesic criterion, which means that the path is locally shortest everywhere. Note, however, that the final path may unnecessarily deviate from the input path. This is because the local operators move the path away from hyperbolic points even though locally shortest paths may traverse them, thus introducing an unnecessary deviation. One year later, Martínez et al. (Martínez et al., 2005) published a similar method, which substitutes the lookup table with a more faithful implementation of the ideas of (Polthier and Schmies, 1998). Specifically, the path is not locally updated at hyperbolic vertices if both θ_l and θ_r are at least π, as the path cannot be locally shortened (in (Wang, 2004) the path was pushed away from the vertex anyway). This allows (Martínez et al., 2005) not only to converge to a path that is locally shortest everywhere, but also to find one that deviates from the input path only if strictly necessary. Finally, Xin et al. (Xin and Wang, 2007) also strive for locally shortest geodesics, but rather than using the local angle criteria expressed in (Polthier and Schmies, 1998), they adopt an equivalent concept based on Fermat’s principle, which states that light always follows the shortest optical path. As in (Wang, 2004) and (Martínez et al., 2005), this visibility-based method is guaranteed to converge to a path that is locally shortest everywhere in the sense of (Polthier and Schmies, 1998).
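The basic edge operator shared by these methods can be sketched in 2D, after the two triangles incident to the crossed edge have been flattened into the plane; the clamping reproduces the "adhere to the border" corner case of CyberTape.

```python
def shorten_across_edge(a, b, e0, e1):
    """Replace the current crossing point on edge e0-e1 with the
    intersection of segment a-b (the two neighboring path points, in the
    flattened plane) with the edge; if the straight line exits the edge,
    the point is clamped to the nearest edge endpoint."""
    dx, dy = b[0] - a[0], b[1] - a[1]      # direction of segment a-b
    ex, ey = e1[0] - e0[0], e1[1] - e0[1]  # direction of the edge
    denom = ex * dy - ey * dx
    if denom == 0:                          # segment parallel to the edge
        t = 0.0
    else:
        t = ((a[0] - e0[0]) * dy - (a[1] - e0[1]) * dx) / denom
    t = min(1.0, max(0.0, t))               # clamp to the edge
    return (e0[0] + t * ex, e0[1] + t * ey)
```

Iterating this operator over all crossing points (together with the vertex operators) is the inner loop of the point-update methods above.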

Remark: a subtle problem affecting all the methods presented in this section is that they do not guarantee convergence to the closest local minimum. In other words, among all the locally shortest paths connecting source and destination, the output may not be the path closest to the input. Consider a path that traverses a spherical vertex in a way that perfectly halves its total angle (i.e., θ_l = θ_r). A locally shortest path never traverses spherical vertices, therefore the path should move to one side. Depending on which side it moves, the algorithm will converge to a different local minimum. There is no way to locally assess which side is best: algorithms use either consistent (e.g., always left) or randomized choices. In both cases, this may not lead to the closest solution. Since the initial path may traverse many spherical vertices, it is difficult to provide bounds on how much the geodesic path will deviate from it.

4.2.3. Iterative path update

Methods that iteratively move one vertex at a time require many iterations to converge. A way to speed up convergence consists in solving a more complex problem at each iteration, thus requiring fewer iterations. Liu et al. (Liu et al., 2017) address the problem by using constrained optimization over the whole path. In their algorithm, each point p_i is assigned to the mesh edge containing it. For points at vertices, only one of the incident edges is considered. The position of p_i is expressed as a linear combination of the endpoints v_i^0, v_i^1 of its edge,

p_i = (1 - t_i) v_i^0 + t_i v_i^1,

where t_i ∈ [0, 1] controls the linear interpolation between the two endpoints. The length of the whole path can now be expressed as a function of the parameters t_i, and minimized by forcing the vector of its partial derivatives to zero. This would in general lead to a linear system, but in order to keep each point on its edge and avoid extrapolation the problem is constrained, making sure that t_i ∈ [0, 1] for every i. The solution of this optimization problem gives the shortest path between the endpoints, constrained to the edges traversed by the original path. Note that the unknowns associated with the path endpoints would allow the solver to deviate from source and destination. One could in principle hard-constrain them, but the authors opted for highly weighted soft constraints followed by snapping, as this ensures better numerical performance of the solver. In order to obtain the optimal path, the problem is solved multiple times, moving each p_i from its current edge to an adjacent edge whenever the corresponding t_i reaches 0 or 1. The algorithm converges when none of the points wants to move to another edge. Since at each iteration the whole optimal path is re-computed from scratch, this algorithm requires far fewer iterations compared to the methods discussed in Sec. 4.2.2. In the paper, the authors show that this algorithm exhibits super-linear convergence, outperforming (Martínez et al., 2005). An additional outcome of this formulation is that the Euclidean distance can easily be substituted with any other metric, such as weighted or anisotropic metrics (Fig. 31). The same does not hold for local methods based on the criteria of Polthier and Schmies (Polthier and Schmies, 1998), which do not extend to non-Euclidean embeddings (Sec. 4.2.1).
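A toy 2D version of the formulation follows. Projected gradient descent with numerical derivatives stands in for the constrained solver used in the paper, and the endpoints are hard-fixed rather than soft-constrained.

```python
def optimize_path(edges, t, src, dst, iters=2000, step=0.05):
    """Each interior point lives on a fixed mesh edge:
    p_i = (1 - t_i) * v0_i + t_i * v1_i. The total path length is
    minimized over the t_i, clamped to [0, 1] (projected gradient
    descent with central-difference derivatives). Returns the final
    length and the optimized parameters."""
    def point(i):
        (x0, y0), (x1, y1) = edges[i]
        return (x0 + t[i] * (x1 - x0), y0 + t[i] * (y1 - y0))

    def length():
        pts = [src] + [point(i) for i in range(len(edges))] + [dst]
        return sum(((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
                   for p, q in zip(pts, pts[1:]))

    h = 1e-6
    for _ in range(iters):
        for i in range(len(t)):
            t[i] += h
            up = length()
            t[i] -= 2 * h
            down = length()
            t[i] += h
            grad = (up - down) / (2 * h)
            t[i] = min(1.0, max(0.0, t[i] - step * grad))  # project to [0, 1]
    return length(), t
```

In the full method, a t_i hitting 0 or 1 triggers a move to an adjacent edge and the problem is re-solved; here the edge sequence stays fixed.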

Figure 31. The optimization-based formulation used in (Liu et al., 2017) makes it possible to generate shortest paths with respect to Euclidean (left), weighted (middle) and anisotropic (right) metrics. This is the only local method able to support such a variety of different metrics. From (Liu et al., 2017).

4.2.4. Graph-based methods

Graph-based methods can also be designed to work in a local fashion. Similarly to the methods described in Sec. 4.1.3, local graph-based methods find the shortest path connecting two points by running Dijkstra’s algorithm on a graph, which is initialized with the vertices and edges of the input mesh, and progressively refined by adding Steiner points on mesh edges. Differently from global methods, Kanai and Suzuki (Kanai and Suzuki, 2000) refine the graph only around the initial edge path connecting source to destination, thus producing a smaller graph and leading to a faster and more scalable implementation. Being based on Dijkstra’s algorithm, weighted metrics can easily be incorporated by associating different weights to the edges in the graph. Lanthier et al. (Lanthier et al., 2001) present a local heuristic method to refine a given path into an approximated locally shortest path. Their method is a local variant of the global one presented at the beginning of Sec. 4.1.3, and can be used with both the fixed and the interval scheme. As for the other local methods, the path is restricted to a sub-domain composed of all the mesh faces traversed by the initial path. If the initial path passes through a mesh vertex, the authors propose a heuristic to decide whether the new path should pass to the left or to the right of it. As already observed at the end of Sec. 4.2.2, these heuristic choices impact the output, possibly resulting in a geodesic line that converges to a bad local minimum. Exact algorithms can also be adapted to the PPGP problem. In the second part of their article, Surazhsky et al. (Surazhsky et al., 2005) proposed a local variant of their MMP implementation optimized to compute exact point-to-point shortest paths connecting a source s and a target t. This variant is based upon an aggressive pruning strategy, which avoids propagating windows that have at least one point p not satisfying the inequality

d_s(p) + d_t(p) ≤ U,

where d_s(p) and d_t(p) are lower bounds on the distances from p to s and to t measured on the mesh, respectively, while U is an upper bound on the length of the shortest path from s to t, obtained with Dijkstra. The algorithm works in two steps: in the first step, the pruning strategy is used to obtain a minimal set of windows; in the second step, the exact algorithm uses the windows to compute an exact shortest path between s and t (details in Section 4.1.1). As observed by the authors, alternative pruning schemes could potentially be used, possibly providing faster implementations of point-to-point MMP.
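The per-point test behind the pruning is a one-liner; a sketch with hypothetical argument names:

```python
def keep_window(lb_to_source, lb_to_target, upper_bound):
    """Per-point pruning test for a point-to-point search: a point can
    lie on a path shorter than the upper bound U (obtained with
    Dijkstra) only if the sum of the lower bounds on its distances to
    the source and to the target does not exceed U. Windows whose
    points fail this test need not be propagated."""
    return lb_to_source + lb_to_target <= upper_bound
```

The tightness of the Dijkstra upper bound directly determines how aggressively windows are culled.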

5. Methods for Geodesic Tracing

In GT, the problem of computing a geodesic curve on a domain is formulated as an initial value problem. As discussed in Sec. 2.1, a curve γ on a surface S is a geodesic if its geodesic curvature is zero everywhere. The geodesic curvature vanishes when the projection of the curvature vector onto the tangent plane is zero, which in turn means that the curvature vector is parallel to the surface normal of S at any point of γ. Given this premise, tracing the geodesic curve that starts from a point p and proceeds as straight as possible in a direction v amounts to solving a second-order ordinary differential equation,

γ''(t) ∥ n(γ(t)),

subject to the initial conditions γ(0) = p, γ'(0) = v (Polthier and Schmies, 1999).
There is a variety of methods that aim to solve this problem, which differ mostly in the type of domain they admit as input. Early works in the field were designed to compute geodesic paths on parametric surfaces, such as NURBS and Bézier patches (Patrikalakis and Bardis, 1989; Maekawa, 1996). This is not surprising, as these were the dominant representations for curved surfaces in the design and manufacturing industry. In this survey we do not provide details on these methods, partly because the article is focused on polyhedral meshes, but also because the reference literature dates back to the 1980s and 1990s, and has already been covered in previous surveys and books. We point the reader to the book of Patrikalakis and Maekawa (Patrikalakis and Maekawa, 2009) for further details on the topic.

5.1. Tracing on polyhedral meshes

Restricting to polyhedral surfaces, the topic was pioneered by Polthier and Schmies (Polthier and Schmies, 1998, 1999), who laid the theoretical bases for tracing geodesics on polyhedral surfaces (Sec. 4.2.1). In (Polthier and Schmies, 1998) they propose two alternative methods to integrate a field on a discrete mesh, one based on Euler integration, and the other based on the fourth-order Runge-Kutta method. It should be noted that tracing a path by numerical integration on a surface immersed in ambient space requires extra effort to keep the path on the surface; some sort of projection method must be devised in order to guarantee it. The concept of straightest geodesics implicitly solves this problem, as the path can be integrated by rotating around vertices by some angular distance, thus never leaving the polyhedral surface. This avoids the tedious and error-prone implementation of geometric projection (e.g., shooting rays or computing intersections with the surface), resulting in more robust software.
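The elementary step of any polyhedral tracer, following a straight line inside a single (unfolded) triangle until it hits an edge, can be sketched as:

```python
def trace_in_triangle(p, d, tri):
    """One step of straightest-geodesic tracing: inside a triangle the
    path is a straight line, so we intersect the ray (p, d) with the
    three edges of the triangle (given by 2D coordinates, e.g. after
    unfolding) and return the nearest forward hit and its edge index."""
    best_t, best = float("inf"), None
    for i in range(3):
        a, b = tri[i], tri[(i + 1) % 3]
        ex, ey = b[0] - a[0], b[1] - a[1]
        denom = ex * d[1] - ey * d[0]
        if abs(denom) < 1e-12:
            continue  # ray parallel to this edge
        # solve p + t*d = a + s*(b - a)
        s = ((p[0] - a[0]) * d[1] - (p[1] - a[1]) * d[0]) / denom
        t = ((a[0] - p[0]) * ey - (a[1] - p[1]) * ex) / -denom
        if 0.0 <= s <= 1.0 and 1e-9 < t < best_t:
            best_t, best = t, ((p[0] + t * d[0], p[1] + t * d[1]), i)
    return best  # ((x, y), edge_index) or None
```

A full tracer repeats this step, unfolding the next triangle across the hit edge (or rotating around a hit vertex so that the left and right angles stay equal) before continuing.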

Kumar and colleagues (Kumar et al., 2003) observed that the angle criterion proposed by Polthier and Schmies does not take surface normals into account, and suffers when there are abrupt orientation changes between adjacent facets. They proposed an alternative technique to trace a locally straightest geodesic on polyhedral surfaces. Given a point p on the surface and a tracing direction d, they define the locally straightest path as the intersection between the surface and the plane passing through p and containing both d and the surface normal at p. The procedure is repeated iteratively, until the path hits the boundary of the mesh, or a certain length is reached. The article also offers a very informative comparison between this strategy, the method of Polthier and Schmies (Polthier and Schmies, 1998), and a state-of-the-art method for geodesic tracing on NURBS. The outcome of their experiments is that: (i) discrete methods suffer from the bias introduced by the tessellation and are consistently less accurate than geodesic tracing on NURBS; (ii) considering the surface normal orientation increases the accuracy of discrete methods. In particular, the authors observed that while for simple developable surfaces all three methods perform equally well and converge to the smooth geodesic curve, for doubly curved non-developable surfaces the method proposed in (Kumar et al., 2003) is comparatively closer to NURBS-based methods, whereas the one of Polthier and Schmies (Polthier and Schmies, 1998) converges to a different curve.

Figure 32. Illustration of an iteration of the discrete tracing method described in (Cheng et al., 2016). Left: starting from a point and tangent vector, the subsequent point is computed on a Bézier patch (yellow) that interpolates a triangle facet (violet). The point is then projected back onto the polyhedral surface (middle-left) and the tangent vector is updated (middle-right); the tangent vector is parallel transported from the previous point to the new one. Hats are used to distinguish entities living on the polyhedral mesh (no hat) from entities living on the Bézier patch (hat). Image taken from (Cheng et al., 2016).

5.2. Hybrid approaches

In order to fill the gap between continuous and discrete approaches, recent literature proposes hybrid solutions, where the discrete geodesic path is computed on a piece-wise smooth meta mesh, and then projected onto the underlying polyhedral surface. In (Cheng et al., 2016) the authors fit a three-sided Bézier patch that interpolates both vertex positions and normals of each facet of a triangle mesh, and use it as a domain to solve the geodesic tracing problem. Patches are glued side by side, guaranteeing continuity across common edges. This ensures that the path can seamlessly travel across mesh edges and vertices. Differently from (Polthier and Schmies, 1998), where path integration is intrinsically linked to the polyhedral surface, the communication between the mesh and the atlas of Bézier patches is provided by shooting rays along surface normals. While this is in general not very robust, the authors handle bad cases by shrinking the integration step size any time the new point does not project onto the same facet as the previous one. The method works by iteratively repeating the following four steps (Fig. 32):

  • Starting from a point and a tangent direction, the next point is computed on the Bézier patch by moving along the tangent direction with a user-prescribed step size;

  • The new point is projected onto the polyhedral surface;

  • A new tangent vector is computed numerically by solving a first-order ODE with the Runge-Kutta method;

  • The new tangent vector is parallel transported from the previous point to the new one.

The algorithm stops when the path hits the boundary of the domain, or when a prescribed path length is reached. Compared to classical tracing methods on parametric surfaces, this method has the advantage of solving a first-order ODE on the tangent vector, thus requiring only first-order continuity of the domain; classical methods solve a second-order ODE on point positions, and therefore require a second-order continuous domain (Patrikalakis and Maekawa, 2009). Furthermore, when compared to purely discrete methods such as (Polthier and Schmies, 1998, 1999), this hybrid approach avoids both the problems of existence and uniqueness of the solution around spherical and hyperbolic vertices (Sec. 4.2.1), and the problem of convergence to the smooth geodesic curve under mesh refinement.
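The step-project-transport loop above can be sketched on an analytic surface. In the toy example below (not from the paper) the unit sphere stands in for the Bézierier patch and the normal-ray projection: projection reduces to normalization, and parallel transport reduces to re-projecting the tangent onto the new tangent plane. The step size and all names are illustrative.

```python
import numpy as np

def trace_sphere(p, v, length, h=1e-3):
    # Step-project-transport loop on the unit sphere:
    #  1. step along the current tangent direction
    #  2. project the new point back onto the surface (normalize)
    #  3. transport the tangent: remove its normal component, renormalize
    p, v = np.array(p, float), np.array(v, float)
    for _ in range(int(length / h)):
        p = p + h * v              # step along current tangent
        p /= np.linalg.norm(p)     # project back onto the sphere
        v = v - np.dot(v, p) * p   # keep v tangent at the new point
        v /= np.linalg.norm(v)
    return p

# Tracing a quarter of a great circle from (1,0,0) toward (0,1,0).
end = trace_sphere([1, 0, 0], [0, 1, 0], np.pi / 2)
```

Since the exact geodesic is a great-circle arc, the endpoint should land near (0, 1, 0), with an error controlled by the step size.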

5.3. Tracing streamlines of a vector field

Methods to trace the streamlines of a general vector field can also be used for GT. Typically, these methods start from a field defined at the vertices of a discrete mesh and extend it inside each facet by linear interpolation. For geodesic tracing, the starting point is a distance field, which is transformed into a piecewise-linear vector field by computing its gradient (see (Mancinelli et al., 2018) for an overview of different techniques for gradient field computation). Typical tracing techniques are based on numerical integration, performed with Euler's method or the fourth-order Runge-Kutta method (Polthier and Schmies, 1998). These numerical approaches are problematic, however, because the approximation error accumulates along the path, possibly producing drifting streamlines that fail to encode the correct global structure of the field. Recent research has shown that cumulative error propagation can be avoided by integrating the path one facet at a time and imposing proper constraints along edges. Bhatia and colleagues (Bhatia et al., 2011) introduced the concept of EdgeMaps, replacing classical integration with linear maps defined at the edges of each facet. Edges are segmented into portions where the flow is either inward or outward. Given an entry point to a facet, the corresponding exit point can be computed by approximating the flow linearly, which means that the curved paths of the vector field are replaced with straight lines that robustly indicate where the flow leaves the facet (Fig. 33). In (Jadhav et al., 2012) it was observed that there exist only 23 possible flow configurations for a triangle; tracing the flow within a facet can therefore be conveniently reduced to a map lookup. EdgeMaps do not completely replace numerical integration, which is still used at initialization time to find the split points for edges.
Furthermore, edge intervals guarantee no overlap of streamlines within a triangle only if their map is monotonic, which holds only up to numerical precision. Ray and Sokolov (Ray and Sokolov, 2014) introduced the concept of stream meshes, which extends the idea of EdgeMaps. They avoid numerical integration entirely, relying only on the field direction along edges, and provide an interval pairing system that makes heavy use of arbitrary-precision floating point numbers to resolve precision issues. The result is a more robust tracing method, which is guaranteed to avoid crossings and collapses between nearby curves. It should be noted that, by avoiding integration within triangles, the traced lines may deviate from the guiding field, while still preserving the correct global behavior.
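The facet-local entry-to-exit computation that EdgeMaps tabulate can be illustrated with a small self-contained sketch: a vector field interpolated linearly from per-vertex values, traced through a single triangle until it leaves, with the last segment clipped exactly against the crossed edge. The triangle, per-vertex vectors, and step size below are made up; a real EdgeMap implementation would replace the stepping with the precomputed linear edge map.

```python
import numpy as np

# Unit right triangle with vertices (0,0), (1,0), (0,1) (demo geometry).
def inside(p):
    x, y = p
    return x >= 0 and y >= 0 and x + y <= 1

def field(p):
    # Linear interpolation of per-vertex vectors via barycentric coordinates.
    W = np.array([[1.0, 0.2], [1.0, 0.2], [1.0, 0.2]])  # per-vertex vectors (constant here)
    x, y = p
    b = np.array([1 - x - y, x, y])  # barycentric coordinates
    return b @ W

def trace_through_facet(entry, h=1e-3, max_steps=10000):
    p = np.array(entry, float)
    for _ in range(max_steps):
        v = field(p)
        q = p + h * v / np.linalg.norm(v)
        if not inside(q):
            # Clip the last segment exactly against the three edge lines.
            x, y = p
            dx, dy = q - p
            ts = []
            if dx < 0:       ts.append(-x / dx)                   # edge x = 0
            if dy < 0:       ts.append(-y / dy)                   # edge y = 0
            if dx + dy > 0:  ts.append((1 - x - y) / (dx + dy))   # edge x + y = 1
            t = min(t for t in ts if t >= 0)
            return p + t * (q - p)
        p = q
    raise RuntimeError("no exit found within max_steps")

exit_point = trace_through_facet([0.0, 0.5])
```

With the constant field (1, 0.2), the exact exit on the hypotenuse is (5/12, 7/12), which the final clipping step recovers to machine precision despite the naive stepping.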

Figure 33. EdgeMaps start from a vector field defined on vertices and linearly interpolated inside each triangle (left), and split triangle edges into slabs where the flow is either incoming or outgoing (middle). This information is eventually transformed into a linear map, which makes it easy to pair entry and exit points (right). In (Jadhav et al., 2012) it was shown that for triangles there exist only 23 possible flow configurations, therefore all possible maps can be efficiently encoded into a lookup table.

5.4. Point to Point tracing

Xie et al. (Xie et al., 2013) offer a different view on geodesic tracing, which they employ to parallel transport deformations across surfaces, essentially solving a PPGP problem where the starting point represents the input shape, the final point the target shape, and the geodesic path connecting them realizes a morphing between the two. The authors propose two alternative methods to compute this path: the shooting method and the straightening method. Both methods are iterative, and generate a sequence of paths that progressively approximate the exact geodesic path connecting the two endpoints. The shooting method works by tracing a discrete geodesic path with a finite number of steps, starting from the source point along a random initial direction, and iteratively updating the shooting direction so that the traced path converges to the target. Assuming all paths are parameterized by arc length, the update amounts to computing the residual vector between the endpoint of the current path and the target point, parallel transporting it back along the path to the source, and using it to correct the shooting direction. The process is repeated until convergence (i.e., until the residual goes to zero). A visual example of the shooting procedure is given in Fig. 34 (top row). The straightening method works by initializing the path as an arbitrary curve connecting the two endpoints, and iteratively straightening it using a gradient descent approach, minimizing the energy function

E(α) = ∫₀¹ ⟨ dα/dt, dα/dt ⟩ dt,

where α denotes the path and the inner product inside the integral is defined using the Riemannian metric of the shape space. Given the gradient of E, the path is updated by taking a step of prescribed size in the negative gradient direction. A visual example of the straightening procedure is given in Fig. 34 (bottom row).
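In the Euclidean plane, where the minimizing geodesic is a straight segment, the straightening iteration reduces to gradient descent on the discrete path energy E = Σᵢ |pᵢ₊₁ − pᵢ|². The sketch below (with a made-up wiggly path, step size, and iteration count) shows the interior samples relaxing onto the segment between the fixed endpoints:

```python
import numpy as np

def straighten(path, lr=0.1, iters=2000):
    # Gradient descent on the discrete path energy E = sum |p_{i+1} - p_i|^2,
    # with both endpoints held fixed. The gradient at an interior sample is
    # 2 * (2 p_i - p_{i-1} - p_{i+1}), a discrete Laplacian of the path.
    p = np.array(path, float)
    for _ in range(iters):
        grad = 2 * (2 * p[1:-1] - p[:-2] - p[2:])
        p[1:-1] -= lr * grad
    return p

# Made-up initial path: a sine wiggle between (0,0) and (1,0).
t = np.linspace(0, 1, 11)
path0 = np.stack([t, 0.3 * np.sin(3 * np.pi * t)], axis=1)
path = straighten(path0)
```

The iteration acts as a smoothing flow on the polyline; the wiggle decays geometrically, and the interior samples end up (numerically) on the straight segment, the planar geodesic.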

Figure 34. Xie et al. (Xie et al., 2013) propose two iterative methods that use geodesic tracing to solve a PPGP problem. The top row shows the shooting method; the bottom row shows the straightening method. The user chooses the number of sample points composing the path, and the algorithms iteratively update an initial path (green) to make it converge (blue) to the exact geodesic obtained from analytical expressions (red). All geodesics are traced in the 2D hyperbolic space. Image taken from (Xie et al., 2013).
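A toy version of the shooting iteration described above can be written on the unit sphere, where the exponential map has a closed form. Projection onto the tangent plane at the source stands in for parallel transport of the residual, and the step factor, path length, and points below are all made up for illustration:

```python
import numpy as np

def exp_sphere(p, v, L):
    # Closed-form exponential map on the unit sphere
    # (p on the sphere, v a unit tangent vector at p, L the arc length).
    return np.cos(L) * p + np.sin(L) * v

def shoot(p, q, L, v0, tau=0.5, iters=100):
    # Shooting iteration: trace the geodesic, measure the endpoint residual,
    # pull it back to the tangent plane at p (projection stands in for
    # parallel transport), and correct the shooting direction.
    p, q, v = (np.array(a, float) for a in (p, q, v0))
    for _ in range(iters):
        end = exp_sphere(p, v, L)
        r = q - end                   # endpoint residual
        v = v + tau * r               # correct the shooting direction
        v = v - np.dot(v, p) * p      # keep v tangent at p
        v /= np.linalg.norm(v)
    return v, exp_sphere(p, v, L)

# Shoot from (1,0,0) toward (0,0,1) along an arc of length pi/2,
# starting from a deliberately wrong initial direction (0,1,0).
v, end = shoot([1, 0, 0], [0, 0, 1], np.pi / 2, v0=[0, 1, 0])
```

Each iteration shrinks the angular error of the shooting direction, so the traced endpoint converges to the target point.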