
Fast Evaluation of Smooth Distance Constraints on Co-Dimensional Geometry

We present a new method for computing a smooth minimum distance function (based on the LogSumExp function) for point clouds, edge meshes, triangle meshes, and combinations of all three. We derive blending weights and a modified Barnes-Hut acceleration approach that ensure our method is conservative (points outside the zero isosurface are guaranteed to be outside the surface), accurate and efficient to evaluate for all the above data types. This, in combination with its ability to smooth sparsely sampled data, like point clouds, enhances typical graphics tasks such as sphere tracing and enables new applications such as direct, co-dimensional rigid body simulation using unprocessed lidar data.



1. Introduction

Distance fields are a fundamental tool in computer graphics. They find use in a wide range of applications, including raymarching and raytracing algorithms, as well as collision detection and resolution schemes. Derivatives of distance fields yield important geometric information, such as normal and curvature data, that can be used for rendering effects or to improve the convergence of optimization schemes. Finally, distance fields abstract away the underlying geometric representation, theoretically enabling algorithms to function on co-dimensional data.

Unfortunately, evaluating and storing an exact distance field is expensive, both in terms of memory and computation. This has motivated a plethora of work on computing, storing and approximating distance fields. From these in-depth explorations and from the properties of the exact distance field, we can construct a picture of the ideal distance field representation. First, any approximate distance field should be conservative with respect to the original field. Overestimating the distance between objects allows interpenetration which leads to, at best, artifacts during rendering and, at worst, catastrophic failure during physics simulations. Second, in rendering, animation and simulation, we often require the derivatives of the distance field. Any practical approximation of a distance field should provide an approximation of these differential quantities as well. Third, geometric data comes in many different forms and an approximate distance field should be applicable to data represented not just as triangles, but as points and line segments as well. Finally, a good distance field approximation should be efficient to evaluate and store. In this paper we present a distance field approximation that satisfies all of the above criteria.

Our approach builds on smooth minimum functions, which have previously been used in geometry processing, computational fabrication and simulation. Specifically, we start with the well-known LogSumExp (Kreisselmeier-Steinhauser) smooth minimum function. The LogSumExp function is guaranteed to provide a conservative bound on the true distance and is differentiable, which satisfies our first two criteria. However, smooth distance functions are not a panacea: applying this formulation naively to co-dimensional data leads to unacceptable swelling artifacts, a problem for which we provide an efficient method of amelioration. Further, we demonstrate that common procedures for accelerating the evaluation of such functions via fast summation break the conservative bounds. Based on this observation, we propose a modified Barnes-Hut procedure which both accelerates evaluation of smooth distances and preserves the important conservative property. Taken together, these contributions yield the first distance function approximation that is conservative, differentiable, co-dimensional and efficient to compute. We evaluate the efficiency of our method on Thingi10K, a large-scale geometry database, and demonstrate its efficacy via a number of co-dimensional rendering and simulation examples.

2. Related Work

The LogSumExp function is commonly used in deep learning (see, e.g., (Zhang et al., 2020)) as a smooth estimate of the maximum of a set of data. Its gradient is the softmax function (which is not a maximum as the name implies, but a smooth estimate of the one-hot argmax of a set of data). LogSumExp can be easily modified to return a smooth minimum distance rather than a maximum (and its gradient is then the softmin). Aside from deep learning, LogSumExp also appears in other contexts where smooth approximations to min/max functions are needed: for example, it is known in the engineering literature as the Kreisselmeier-Steinhauser distance (Kreisselmeier and Steinhauser, 1979). LogSumExp has also been used recently in computer graphics by Panetta et al. (2017) to smoothly blend between microstructure joints. LogSumExp is just one of a number of smooth distance functions. For instance, the norm function has been used as a smooth distance as well, for smooth blending between implicit surfaces (Wyvill et al., 1999) and for computing smooth shells around surfaces (Peng et al., 2004). A similar function can be computed directly from boundary integrals (Belyaev et al., 2013). These functions could act as a drop-in replacement in much of our proposed algorithm, but the former tends to return numerically 0 results for far-away points, which makes the zero isosurface ambiguous, and it is unclear whether the latter even exhibits the important underestimate property. LogSumExp, by contrast, satisfies the underestimate property and numerically returns inf for far-away points, which leaves the zero isosurface unambiguous, so we construct our method around it.

Gurumoorthy and Rangarajan (2009) demonstrated a relationship between the Eikonal equation and the time-independent Schrödinger equation (which is a screened Poisson equation), and used this to derive the LogSumExp function as an approximate solution to the Eikonal equation. They evaluate the LogSumExp using a convolution in the frequency domain, which requires a background grid to compute the FFT and its inverse. Further, their method requires all data points to be snapped to grid vertices. While their approach is more efficient than a full evaluation, our method achieves comparable asymptotic performance without a background grid and therefore respects the input geometry.

Sethi et al. (2012) extended this line of work by adding support for edges, but they integrate the exponentiated distance over each edge, which, as we show in Section 3.1, can lead to overestimated distances.

Smooth signed distance functions (SDFs) have recently become popular in machine learning as well. A full accounting is beyond the scope of this paper, but see, for instance, Chen and Zhang (2019); Park et al. (2019); Mescheder et al. (2019). These approaches encode geometric information in latent vectors that are used at evaluation time, along with an input position in space, to evaluate a learned signed distance function. Aside from being unsigned rather than signed, our distance approximation diverges in two important ways from work on neural SDFs. First, our representation is an augmentation of the exact geometry with a smooth approximation rather than an outright replacement. This means that algorithms that still require the original discrete geometry for operations such as continuous collision detection (Li et al., 2020) can make use of our method. Second, the only preprocessing in our method is building a BVH over the data (which typically takes less than a second) rather than training a neural network.

One particular feature of smooth distance functions is that they can use point data to construct implicit functions whose zero isosurface represents the surface. There are many other methods that accomplish this, such as radial basis functions (Carr et al., 2001) and Poisson surface reconstruction (Kazhdan et al., 2006; Kazhdan and Hoppe, 2013). Although these methods are capable of producing very accurate surface reconstructions, they require solving large linear systems, while in our approach the implicit function is readily available. Another surface reconstruction method is the point set surface (Alexa et al., 2003), though instead of producing an implicit function, this method finds points on the described surface. Level set methods (Zhao et al., 2001) have also been used to reconstruct surfaces, though they require a grid and may require prohibitively dense voxels to capture fine detail in the underlying surface.

Collision resolution has long been a difficult problem in physics-based animation and engineering. While no efficient method for deformable SDFs exists, a useful approximation is to use a signed distance function of the undeformed space (McAdams et al., 2011). Mitchell et al. (2015) extended this work by using a non-manifold grid to accurately represent high-frequency and even zero-width features. Recently, Macklin et al. (2020) have used SDFs to represent rigid objects that robustly collide with deformable objects. Further, barrier energies for constrained optimization can be seen as a smooth analog for distance-based constraints. These have seen much success in geometry processing (Smith and Schaefer, 2015) and physics simulation (Li et al., 2020; Ferguson et al., 2021). Previously these approaches were all limited to acting on input triangle mesh geometry. Concurrent work proposes separate extensions to co-dimensional (Li et al., 2021) and medial geometry (Lan et al., 2021). While effective for this particular application, these methods lack the ability to smooth input data (Ferguson et al., 2021) and so cannot, for instance, approximate point-clouds as closed surfaces (Fig. 1) for smooth collision resolution. They also rely on low-level machinery like interval arithmetic to provide conservative distance estimates. Our approach is constructed from the ground up to provide a conservative distance estimate and, because it approximates the true distance function, it is suitable for applications outside of just collision detection.

The key to our method is a carefully designed weighted smooth distance function, combined with a specialized Barnes-Hut approximation (Barnes and Hut, 1986). The Barnes-Hut algorithm was first developed for N-body simulations to reduce computational effort on far-away bodies with negligible contributions to force. Barnes-Hut approximations have seen use in many graphics applications which use rapidly decaying kernels (e.g., (Barill et al., 2018; Alexa et al., 2003)), but as we will show, careful modification is needed to ensure that fast evaluation does not break the conservative bounds of the LogSumExp function.



3. Method

Given an input geometry $\mathcal{S}$ embedded in $\mathbb{R}^3$, the unsigned distance field is defined as

$$d(q) = \min_{x \in \mathcal{S}} \lVert q - x \rVert, \quad (1)$$

where $x$ is a point on the input geometry and $q$ is an arbitrary query point.

In our case, the input geometry is represented as a co-dimensional data mesh, $\mathcal{M} = (V, F)$. Here $V$ is a set of vertices in $\mathbb{R}^3$, and $F$ is a set of simplices (we deal with points, edges, and triangles). A simplex $f \in F$ consists of a tuple of indices into $V$. For example, a triangle would be represented as $f = (i, j, k)$ where each index refers to a vertex in $V$. Edges can be constructed from triangle indices using, e.g., $(i, j)$. Lastly, we denote the dimension of a simplex as $\dim(f)$ and the volume of a simplex as $|f|$ (which is 1 for points). $F$ can be omitted for point clouds, but we will use it throughout to keep the notation consistent.
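As a concrete illustration of this representation, the following sketch (our own, not the paper's code; the names are ours) extracts the unique undirected edges of a triangle list by storing each index pair in sorted order:

```cpp
#include <algorithm>
#include <array>
#include <set>
#include <utility>
#include <vector>

using Tri = std::array<int, 3>;

// Collect the unique undirected edges of a triangle mesh. Each directed
// edge (a, b) is stored as (min, max) so shared edges coincide.
std::set<std::pair<int, int>> meshEdges(const std::vector<Tri>& tris) {
    std::set<std::pair<int, int>> edges;
    for (const auto& t : tris)
        for (int e = 0; e < 3; ++e) {
            int a = t[e], b = t[(e + 1) % 3];
            edges.insert({std::min(a, b), std::max(a, b)});
        }
    return edges;
}
```

Two triangles sharing an edge, for example, yield five unique edges rather than six.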

With respect to this discretized input, the distance field computation can be reframed as finding the minimum distance between the query point and all constituent simplices of $\mathcal{M}$:

$$d(q) = \min_{f \in F} d(q, f). \quad (2)$$

However, $d$ is only $C^0$, with gradient discontinuities along the medial axes of the data mesh. Furthermore, computing all pairwise primitive distances has $O(|F|)$ complexity per query in the number of data primitives, so it is impractical for large meshes. Our goal is to tackle both problems by designing a distance function that is differentiable and can be efficiently approximated.

We begin by converting the true distance to a smooth distance via an application of the LogSumExp smooth minimum function, which yields

$$\tilde{d}(q) = -\frac{1}{\alpha} \log \sum_{f \in F} w_f \exp\left(-\alpha\, d(q, f)\right), \quad (3)$$

where $\alpha > 0$ controls the accuracy and smoothness of the approximation and the $w_f$ are influence weights for each simplex (which will be discussed in more detail in Section 3.3).

Importantly, this function is differentiable for all finite values of $\alpha$ with gradient (with respect to $q$)

$$\nabla \tilde{d}(q) = \frac{\sum_{f \in F} w_f \exp\left(-\alpha\, d(q, f)\right) \nabla d(q, f)}{\sum_{f \in F} w_f \exp\left(-\alpha\, d(q, f)\right)}, \quad (4)$$

and, as long as each $w_f \geq 1$, it is guaranteed to underestimate the true distance to the input mesh (Appendix A). Intuitively, this means that our smooth distance will alert us to crossing the input surface before it happens. This conservative property is crucial for applications such as rendering or collision detection, since it ensures that maintaining separation with respect to the smooth distance is sufficient to maintain separation between input shapes (Fig. 2). Thus, the output of our method remains usable even if downstream applications require the underlying geometric representations to be separated.

Figure 2. Points (red) starting outside a shape can be prevented from crossing into it by ensuring their trajectories (dashed line) never cross the isosurface of an unsigned distance field. Non-conservative estimates break this property, allowing interpenetration of the underlying geometry. Such estimates are unusable if downstream algorithmic stages require the geometries to be intersection-free.
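The conservative property is easy to check numerically. The sketch below (our own illustration with unit weights and our own function names, not the paper's implementation) evaluates the LogSumExp smooth minimum of Eq. 3 for a point cloud and compares it against the true minimum distance:

```cpp
#include <array>
#include <cmath>
#include <vector>

using Vec3 = std::array<double, 3>;

double dist(const Vec3& a, const Vec3& b) {
    double s = 0;
    for (int i = 0; i < 3; ++i) s += (a[i] - b[i]) * (a[i] - b[i]);
    return std::sqrt(s);
}

// True minimum distance from query q to a point cloud P.
double trueMinDist(const Vec3& q, const std::vector<Vec3>& P) {
    double m = INFINITY;
    for (const auto& p : P) m = std::min(m, dist(q, p));
    return m;
}

// Eq. 3 specialized to points with unit weights. With all weights >= 1
// this is guaranteed to underestimate trueMinDist.
double smoothMinDist(const Vec3& q, const std::vector<Vec3>& P, double alpha) {
    double sum = 0;
    for (const auto& p : P) sum += std::exp(-alpha * dist(q, p));
    return -std::log(sum) / alpha;
}
```

As $\alpha$ grows, the smooth distance converges to the true minimum from below.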

3.1. Smooth Distance to a Single Query Point

As a didactic example, let's apply Eq. 3 to a point cloud, which we do by treating each point as its own zero-dimensional simplex and setting $w_f = 1$. We can directly observe the smoothing effect of $\alpha$ (Fig. 3), which can be used to close point clouds. Decreasing $\alpha$ produces progressively smoother surface approximations, and surfaces produced with smaller $\alpha$ values nest those produced with higher values. This nesting is a consequence of the conservative behaviour of the LogSumExp formula.

Figure 3. At low values of $\alpha$, the sharp antlers and face of this deer point cloud become puffy and smooth (top left). Increasing $\alpha$ resolves features in the face more clearly (top middle), and the individual points become visible at high $\alpha$ (top right). The bottom row shows the transition between different values — higher-$\alpha$ surfaces are contained in lower-$\alpha$ surfaces.

An interesting property of $\tilde{d}$ is that it forms a small negative band around the underlying surface, which gets smaller as $\alpha$ increases. Essentially, $\tilde{d}$ represents the implicit function of a thin shell around $\mathcal{M}$. This property is useful for constraints, since small violations of the constraint will result in a sign change, which a purely unsigned function cannot provide and which would otherwise necessitate a root-finding procedure to detect intersections.

All of this taken together means that the naive LogSumExp works well for point cloud geometry.
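The negative band is easy to observe numerically. In the sketch below (our own illustration, not the paper's code), evaluating the smooth distance exactly at one of the cloud's points yields a small negative value whose magnitude shrinks as $\alpha$ grows:

```cpp
#include <array>
#include <cmath>
#include <vector>

using Vec3 = std::array<double, 3>;

// LogSumExp smooth min distance to a point cloud, unit weights (Eq. 3).
double smoothMin(const Vec3& q, const std::vector<Vec3>& P, double alpha) {
    double sum = 0;
    for (const auto& p : P) {
        double dx = q[0] - p[0], dy = q[1] - p[1], dz = q[2] - p[2];
        sum += std::exp(-alpha * std::sqrt(dx * dx + dy * dy + dz * dz));
    }
    // At a cloud point, the nearest term contributes exp(0) = 1 and every
    // other term is positive, so sum > 1 and the result is negative.
    return -std::log(sum) / alpha;
}
```

The query point lies exactly on the "surface", yet the smooth distance reports a small negative value: the thin negative shell described above.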

3.2. Smooth Distances to Co-Dimensional Geometry

A natural extension of Eq. 3 to edge and triangle meshes is to replace the discrete sum over points with a continuous integral (see, e.g., (Sethi et al., 2012)) over the surface:

$$\tilde{d}(q) = -\frac{1}{\alpha} \log \int_{\mathcal{S}} \exp\left(-\alpha \lVert q - x \rVert\right) dx.$$

When discretized, this becomes equivalent to applying Eq. 3 to quadrature points on the mesh, while the $w_f$ become the quadrature weights.

While simple, this formulation will unfortunately break the important conservative property of the LogSumExp function because the quadrature weights will, in general, not be greater than or equal to 1 (Appendix A). Fig. 4 shows examples of the overestimation errors introduced by this approach.

Figure 4. Computing edge distances with LogSumExp and sampled point quadrature (left) creates holes in the isosurface at high $\alpha$. Integrating over the edge (middle) using Gaussian quadrature can remove the holes but will overestimate distance at low $\alpha$. Computing exact distances (right) produces the correct isosurface. The edge and a small offset surface are shown in each; the underestimate property requires that only the first color interval be contained in the offset region.
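The overestimation can be reproduced in a few lines. The sketch below (our own illustration; it assumes midpoint quadrature, whose weights $1/n$ are less than 1) shows the quadrature-based LogSumExp exceeding the exact point-to-segment distance for a nearby query, violating the conservative bound:

```cpp
#include <algorithm>
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

double norm(const Vec3& v) { return std::sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]); }
Vec3 sub(const Vec3& a, const Vec3& b) { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }

// Exact point-to-segment distance (projection clamped to [0, 1]).
double segDist(const Vec3& q, const Vec3& a, const Vec3& b) {
    Vec3 ab = sub(b, a), aq = sub(q, a);
    double t = (ab[0]*aq[0] + ab[1]*aq[1] + ab[2]*aq[2]) /
               (ab[0]*ab[0] + ab[1]*ab[1] + ab[2]*ab[2]);
    t = std::clamp(t, 0.0, 1.0);
    Vec3 c = {a[0] + t*ab[0], a[1] + t*ab[1], a[2] + t*ab[2]};
    return norm(sub(q, c));
}

// LogSumExp over n quadrature midpoints with weight 1/n each.
// Because 1/n < 1, the conservative bound can fail.
double quadSmoothDist(const Vec3& q, const Vec3& a, const Vec3& b,
                      int n, double alpha) {
    double sum = 0, w = 1.0 / n;
    for (int i = 0; i < n; ++i) {
        double t = (i + 0.5) / n;
        Vec3 x = {a[0] + t*(b[0]-a[0]), a[1] + t*(b[1]-a[1]), a[2] + t*(b[2]-a[2])};
        sum += w * std::exp(-alpha * norm(sub(q, x)));
    }
    return -std::log(sum) / alpha;
}
```

At high $\alpha$, the quadrature version returns a value larger than the exact distance to the edge, which is exactly the failure mode in Fig. 4.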

Rather, we must return to Eq. 3 and compute the respective distances $d(q, f)$ to the constituent mesh triangles and edges exactly. This can be accomplished via efficient quadratic programming, which we detail in Appendix B. However, using unit-valued weights, as we did for points, produces bulging artifacts where primitives connect (Fig. 5). What remains is to compute weight functions that mitigate these effects while simultaneously satisfying our conservativeness constraint.

Figure 5. Distances tend to concentrate where primitives overlap, creating thin edges (top left) and bumps in the surface (bottom left). Weight functions help mitigate these effects (right), for both edge meshes (a) and triangle meshes (b).

3.3. Weight Functions

Panetta et al. (2017) observe the same concentration artifacts we do when using LogSumExp on edge meshes. They propose fixing it using weighted blending, but their weight calculation relies on knowing the convex hull of the edge mesh neighborhood and blending between smooth and exact distance fields. Unfortunately, it is unclear how to extend this to triangles or apply acceleration schemes like Barnes-Hut. Rather, we propose a different weighting scheme which is fast, local and can be easily adapted to triangles.

We center the design of our per-primitive weight functions around high-accuracy use cases, which correspond to high values of $\alpha$. The weights must be spatially varying to counteract the local concentration artifacts, and to simplify the function definition, we define them as functions of the barycentric coordinates $\beta$ of the closest point on $f$ to $q$. We proceed by analyzing different cases for the location of the closest point on $f$ to $q$, and employ polynomial fitting using the identified fixed points.

First, suppose the closest point on $f$ to $q$ is at a vertex $v$. We can show that weights inversely proportional to the valence of $v$ produce an accurate solution (see Appendix C). A similar argument applies when $\mathcal{M}$ contains triangles and the closest point is on an edge (or, in general, when the closest point is on a face of a simplex in $\mathcal{M}$). When the closest point is in the interior of a simplex in $\mathcal{M}$ (not on a face), we can show that weights of 1 are sufficient (see Appendix C). This gives us 3 fixed points for edges and 7 fixed points for triangles.

Unfortunately, these weight values are less than 1 on the faces, and thus the bound is no longer conservative. We rectify this by first computing unscaled weight functions $\hat{w}_f$, and then scaling by a factor chosen so that $w_f \geq 1$ everywhere. Although this inflates the zero isosurface, it does so uniformly, so we do not reintroduce the artifacts we wanted to remove. This error is logarithmic in the scale factor and becomes negligible at higher values of $\alpha$.

Figure 6. All points along a line extending perpendicular to a simplex boundary share a closest point and its associated weight function value.

Unfortunately, we still have another, more subtle problem: gradients at simplex boundaries are discontinuous. We want these gradients to be 0 because, once the closest point to $q$ on $f$ is at the boundary, moving in a direction orthogonal to the boundary does not change the closest point, so the weight function should not change (Fig. 6). Before proceeding, we note that the gradient of $w_f$ is

$$\nabla w_f = G^\top \frac{\partial w_f}{\partial \beta},$$

where $G$ is the matrix of linear shape function gradients from finite element analysis, and $\frac{\partial w_f}{\partial \beta}$ only depends on the weight function itself. In the case of edges, $\beta$ is a single scalar $t$, so setting the normal derivatives to 0 simply involves requiring $\frac{\partial w_f}{\partial t} = 0$ at the endpoints of the edge. However, this becomes difficult for triangles, since only relatively high-order polynomials can support this property, and those tend to exhibit undesirable oscillatory behaviour. Another approach would be to minimize the Laplacian energy subject to the Dirichlet conditions described above (zero Neumann conditions are the natural boundary conditions of this energy (Stein et al., 2018)) and solve using the finite element method. Unfortunately, this requires a dense mesh for each triangle in $\mathcal{M}$, and since we must use a small number of variables to avoid prohibitive memory pressure, finite elements are infeasible for our use case. Even the boundary element method does not solve this issue, since we would require a volumetric mesh to set Dirichlet conditions on the triangle interior. Thus, we simply use low-order polynomials (Fig. 7) for triangles without any derivative constraints, and instead aim to tame normal derivatives by reducing the magnitude of $G$.

Figure 7. An example of the unscaled cubic weights, $\hat{w}_f$, computed by our method for a single triangle.

For triangles, the barycentric coordinate vector is $\beta = (\beta_1, \beta_2, \beta_3)$, and the gradient of each coordinate is $\nabla \beta_i = \frac{e_i^\perp}{2A}$, where $A$ is the triangle area and $e_i^\perp$ represents a 90-degree counterclockwise rotation of the edge $e_i$ opposite vertex $i$. We see that gradients are inversely proportional to triangle area, and are badly behaved for smaller triangles (Fig. 8). To control these gradients, we notice that $\alpha$'s use in Eq. 3 can be interpreted as a uniform scale factor on space, and is equivalent to measuring lengths with the metric $\alpha^2 I$, where $I$ is the identity matrix. This interpretation does not change $\tilde{d}$, but it does change the barycentric gradients (e.g., we now have $\nabla \beta_i = \frac{e_i^\perp}{2\alpha A}$), which gives us direct control over the problematic gradient term (Fig. 8). We also use this gradient scaling for edges in our implementation, though it is not necessary.

Figure 8. Small meshes produce large weight gradients which become noticeable at triangle boundaries (left); uniformly scaling the ambient space allows us to control the length of these gradients and produce smooth results for meshes of any size (right).

To summarize our design so far, we now have 5 constraints for edge weights (3 value + 2 derivative) and 7 constraints for triangle weights (value constraints only). We have enough constraints to fit a univariate quartic polynomial for edges, and if we duplicate the value constraints on triangle edges, we end up with 10 constraints, which is enough to fit a bivariate cubic polynomial. To make the functions well-shaped and to retain our high-accuracy assumptions, we spread out these constraints when applicable — for example, constraints are placed at edge and triangle barycenters for edge and triangle weights respectively, and the two constraints on each triangle edge are evenly spaced along it. Combined with the barycentric gradient scaling method, we now have well-behaved weight functions for high $\alpha$.
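The edge-weight fit amounts to a small linear solve. The sketch below (our own illustration; the constraint values, e.g. an endpoint weight of 1/2 for a valence-2 vertex, are illustrative placeholders) fits a univariate quartic to 3 value and 2 derivative constraints:

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Solve a small dense linear system A x = b by Gaussian elimination
// with partial pivoting (sufficient for a 5x5 fit).
std::vector<double> solve(std::vector<std::vector<double>> A, std::vector<double> b) {
    int n = (int)b.size();
    for (int k = 0; k < n; ++k) {
        int piv = k;
        for (int i = k + 1; i < n; ++i)
            if (std::fabs(A[i][k]) > std::fabs(A[piv][k])) piv = i;
        std::swap(A[k], A[piv]); std::swap(b[k], b[piv]);
        for (int i = k + 1; i < n; ++i) {
            double f = A[i][k] / A[k][k];
            for (int j = k; j < n; ++j) A[i][j] -= f * A[k][j];
            b[i] -= f * b[k];
        }
    }
    std::vector<double> x(n);
    for (int i = n - 1; i >= 0; --i) {
        double s = b[i];
        for (int j = i + 1; j < n; ++j) s -= A[i][j] * x[j];
        x[i] = s / A[i][i];
    }
    return x;
}

// Fit a quartic w(t) = sum_k c_k t^k to the edge-weight constraints:
// w(0) = w0, w(1) = w1, w(0.5) = 1, w'(0) = 0, w'(1) = 0.
std::vector<double> fitEdgeWeight(double w0, double w1) {
    std::vector<std::vector<double>> A = {
        {1, 0, 0, 0, 0},                  // w(0)
        {1, 1, 1, 1, 1},                  // w(1)
        {1, 0.5, 0.25, 0.125, 0.0625},    // w(0.5)
        {0, 1, 0, 0, 0},                  // w'(0)
        {0, 1, 2, 3, 4}};                 // w'(1)
    return solve(A, {w0, w1, 1.0, 0.0, 0.0});
}

double evalPoly(const std::vector<double>& c, double t) {
    double v = 0, p = 1;
    for (double ck : c) { v += ck * p; p *= t; }
    return v;
}
```

The resulting weights dip below 1 at the endpoints, which is why the uniform scaling of Section 3.3 is applied afterwards to restore the conservative bound.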

A key assumption in our design of $w_f$ is that $\alpha$ is sufficiently large, but since $\alpha$ is a controllable user parameter, this assumption can sometimes be violated, and the weights can overcompensate for concentration. In these cases $w_f$ needs to be further modified to avoid artifacts (Fig. 9). If $\alpha$ surpasses some threshold $\alpha_{\min}$ (which at a high level represents an upper bound on the low-accuracy regime — see Section 4.3 for more information on how it is selected), then we can use $w_f$ as-is; if $\alpha < \alpha_{\min}$, then we must flatten, or attenuate, $w_f$. Experimentally, we observe that using the scale factor $\alpha / \alpha_{\min}$ to define an attenuated weight function, with its gradient attenuated by the same factor, helps minimize these issues. Note that this heuristic does not eliminate the issue for all $\alpha$, but it does mitigate it sufficiently for the ranges of $\alpha$ used in this paper.

Figure 9. At low values of $\alpha$, the weight function overcompensates and produces creases and other spurious artifacts in this very blobby hexagonal torus (left), so we attenuate it to mitigate these effects (right).

3.4. Generalized Query Primitives

We will briefly discuss how to generalize our query point $q$ into a general query primitive $p$, which can also be an edge or a triangle. Just as with points, we can compute the distance $d(p, f)$ to another simplex using a positive semidefinite quadratic program with linear constraints (see Appendix B). We now also obtain barycentric coordinates for $p$ through the argmin, which we denote $\beta_p$. All formulas in the preceding subsections can be adapted to this generalization by simply replacing $q$ with $p$, and using the closest point on $p$ to $f$ when a single point is needed (e.g., in barycentric coordinate gradients). These changes imply that the gradient is now taken with respect to the query primitive as a whole, which simply describes a rigid translation of $p$.
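For point queries against triangles, the quadratic program of Appendix B reduces to a closest-point search over the triangle's Voronoi regions. The sketch below (our own illustration, using the standard geometric case analysis rather than a QP solver) computes this distance:

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

static Vec3 sub(const Vec3& a, const Vec3& b) { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
static double dot(const Vec3& a, const Vec3& b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

// Closest point on triangle (a, b, c) to p, via Voronoi-region case
// analysis over vertices, edges, and the interior.
Vec3 closestPointTriangle(const Vec3& p, const Vec3& a, const Vec3& b, const Vec3& c) {
    Vec3 ab = sub(b, a), ac = sub(c, a), ap = sub(p, a);
    double d1 = dot(ab, ap), d2 = dot(ac, ap);
    if (d1 <= 0 && d2 <= 0) return a;                    // vertex region a
    Vec3 bp = sub(p, b);
    double d3 = dot(ab, bp), d4 = dot(ac, bp);
    if (d3 >= 0 && d4 <= d3) return b;                   // vertex region b
    double vc = d1 * d4 - d3 * d2;
    if (vc <= 0 && d1 >= 0 && d3 <= 0) {                 // edge region ab
        double v = d1 / (d1 - d3);
        return {a[0] + v*ab[0], a[1] + v*ab[1], a[2] + v*ab[2]};
    }
    Vec3 cp = sub(p, c);
    double d5 = dot(ab, cp), d6 = dot(ac, cp);
    if (d6 >= 0 && d5 <= d6) return c;                   // vertex region c
    double vb = d5 * d2 - d1 * d6;
    if (vb <= 0 && d2 >= 0 && d6 <= 0) {                 // edge region ac
        double w = d2 / (d2 - d6);
        return {a[0] + w*ac[0], a[1] + w*ac[1], a[2] + w*ac[2]};
    }
    double va = d3 * d6 - d5 * d4;
    if (va <= 0 && (d4 - d3) >= 0 && (d5 - d6) >= 0) {   // edge region bc
        double w = (d4 - d3) / ((d4 - d3) + (d5 - d6));
        return {b[0] + w*(c[0]-b[0]), b[1] + w*(c[1]-b[1]), b[2] + w*(c[2]-b[2])};
    }
    double denom = 1.0 / (va + vb + vc);                 // interior: barycentric
    double v = vb * denom, w = vc * denom;
    return {a[0] + ab[0]*v + ac[0]*w, a[1] + ab[1]*v + ac[1]*w, a[2] + ab[2]*v + ac[2]*w};
}

double triDist(const Vec3& p, const Vec3& a, const Vec3& b, const Vec3& c) {
    Vec3 cp = closestPointTriangle(p, a, b, c);
    Vec3 d = sub(p, cp);
    return std::sqrt(dot(d, d));
}
```

The barycentric coordinates $(1 - v - w, v, w)$ recovered in the interior case are exactly the $\beta$ used by the weight functions of Section 3.3.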

3.5. Barnes-Hut Approximation

Although our distance function is smooth and easy to differentiate, it requires evaluation of a distance between every pair of primitives in the most general case. However, its use of exponentially decaying functions enables the application of the well-known Barnes-Hut approximation (Barnes and Hut, 1986) to accelerate evaluation. Barnes-Hut uses a far-field approximation to cluster groups of far-away primitives together (typically using a spatial subdivision data structure like an octree or bounding volume hierarchy) and treat them as a single point. The approximation is characterized by the centers of each bounding region and a user parameter $\theta$ which controls where the far-field approximation is employed — see Appendix D for details. At a high level, lower $\theta$ is more accurate, and $\theta = 0$ results in an exact evaluation.

Placing the far-field expansion at the center of mass reduces overall error (Barnes and Hut, 1986), but it can overestimate the true distance, breaking the conservative nature of the LogSumExp smooth distance. A minor modification reinstates this property: placing the expansion center on the closest point of the cluster region to the query primitive $p$, rather than at the center of mass. When using an octree or BVH, we pick the closest point on the bounding box to the query point. This may increase the error relative to using the center of mass, but it guarantees that the Barnes-Hut estimate only underestimates the exact smooth distance (Fig. 10). Note that this choice does not affect the gradient if the bounding region is convex (which is the case for bounding box hierarchies and octrees), since rigidly translating $p$ farther away along the gradient direction does not change the closest point.

Figure 10. The center of the Barnes-Hut approximation (purple) has a significant effect on the sign of the error. The center of mass can potentially be farther away than some data points (orange), which can lead to an overestimate of the true distance. On the other hand, the closest point on the bounding box is guaranteed to be closer than every data point (blue), so the final result will always be an underestimate.
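The box-based expansion center is a per-axis clamp, and its conservative behaviour is easy to verify. The sketch below (our own illustration) computes the closest point on an axis-aligned box and checks that it is never farther from the query than any point inside the box, whereas the center of mass can be:

```cpp
#include <algorithm>
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Closest point on the axis-aligned box [lo, hi] to q: clamp each axis.
Vec3 closestPointOnBox(const Vec3& q, const Vec3& lo, const Vec3& hi) {
    Vec3 c;
    for (int i = 0; i < 3; ++i) c[i] = std::clamp(q[i], lo[i], hi[i]);
    return c;
}

double dist(const Vec3& a, const Vec3& b) {
    double s = 0;
    for (int i = 0; i < 3; ++i) s += (a[i] - b[i]) * (a[i] - b[i]);
    return std::sqrt(s);
}
```

Because every clustered primitive lies inside the box, the distance to the clamped point is a lower bound on the distance to each of them, which is what keeps the far-field substitution an underestimate.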

Finding the closest point on a box to a query point is simple, but finding the closest point on a box to a query edge or triangle is more complex and requires solving a quadratic program. This is relatively expensive for what is meant to be a fast check that reduces computation time, so we instead use an approximation. Viewing a box as the intersection of 6 halfspaces, we identify which halfspaces the primitive lies completely outside of. If there are 3 such halfspaces, they identify the corner of the box closest to the primitive; if there are 2, they identify an edge of the box; and if there is 1, it identifies a face. If no halfspaces satisfy this criterion, then we simply return the distance to the box as 0 and force the traversal to visit the node's children. Although this test makes the Barnes-Hut approximation somewhat more conservative, the computational savings more than make up for it — we noticed a substantial improvement in performance in our experiments.

Pseudocode summarizing our method so far is given in Algorithm 1 and Algorithm 2.

Inputs: data mesh $\mathcal{M}$, BVH node $B$, query primitive $p$, parameters $\alpha$, $\theta$
Outputs: sum of exponentials $E$ and sum of weighted distance gradients $G$
if $B$ is far enough from $p$ (Barnes-Hut criterion with $\theta$) then   // App. D, Sec. 3.5
    return far field expansion   // App. D
else if $B$ is a leaf containing simplex $f$ then
    compute $d(p, f)$   // Sec. 3.4, App. B
    compute weight $w_f$   // Sec. 3.3
    $E \leftarrow w_f \exp(-\alpha\, d(p, f))$   // Eq. 3
    $G \leftarrow w_f \exp(-\alpha\, d(p, f))\, \nabla d(p, f)$   // Eq. 4
    return $E$, $G$
else   // $B$ has children $B_1$ and $B_2$
    $(E_1, G_1) \leftarrow$ collectContributions($\mathcal{M}$, $B_1$, $p$, $\alpha$, $\theta$)
    $(E_2, G_2) \leftarrow$ collectContributions($\mathcal{M}$, $B_2$, $p$, $\alpha$, $\theta$)
    return $E_1 + E_2$, $G_1 + G_2$
end if
ALGORITHM 1 collectContributions

Inputs: data mesh $\mathcal{M}$, BVH root $B$, query primitive $p$, parameters $\alpha$, $\theta$
Outputs: smooth min distance $\tilde{d}$ and gradient $\nabla \tilde{d}$
$(E, G) \leftarrow$ collectContributions($\mathcal{M}$, $B$, $p$, $\alpha$, $\theta$)
$\tilde{d} \leftarrow -\frac{1}{\alpha} \log E$
$\nabla \tilde{d} \leftarrow G / \max(E, \varepsilon)$   // avoid divide-by-zero
ALGORITHM 2 Computing smoothMinDist
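The two algorithms can be sketched end-to-end for the point-cloud case. The code below (our own simplified illustration: unit weights, no gradients, a median-split point BVH, and a diagonal-over-distance far-field criterion standing in for the test of Appendix D) demonstrates that placing the far-field expansion at the closest box point keeps the accelerated evaluation an underestimate of the exact smooth distance:

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

using Vec3 = std::array<double, 3>;

double dist(const Vec3& a, const Vec3& b) {
    double s = 0;
    for (int i = 0; i < 3; ++i) s += (a[i] - b[i]) * (a[i] - b[i]);
    return std::sqrt(s);
}

struct Node {
    Vec3 lo, hi;             // bounding box
    int count = 0;           // number of points in the subtree
    std::vector<Vec3> pts;   // stored at leaves only
    Node *left = nullptr, *right = nullptr;
};

// Median-split BVH over a point cloud (nodes are leaked; fine for a sketch).
Node* build(std::vector<Vec3> pts) {
    Node* n = new Node;
    n->lo = n->hi = pts[0];
    for (const auto& p : pts)
        for (int i = 0; i < 3; ++i) {
            n->lo[i] = std::min(n->lo[i], p[i]);
            n->hi[i] = std::max(n->hi[i], p[i]);
        }
    n->count = (int)pts.size();
    if (pts.size() <= 2) { n->pts = pts; return n; }
    int ax = 0;
    for (int i = 1; i < 3; ++i)
        if (n->hi[i] - n->lo[i] > n->hi[ax] - n->lo[ax]) ax = i;
    std::sort(pts.begin(), pts.end(),
              [ax](const Vec3& a, const Vec3& b) { return a[ax] < b[ax]; });
    size_t m = pts.size() / 2;
    n->left = build(std::vector<Vec3>(pts.begin(), pts.begin() + m));
    n->right = build(std::vector<Vec3>(pts.begin() + m, pts.end()));
    return n;
}

// Sum of exp(-alpha * d) with a conservative far-field rule: far clusters
// are collapsed onto the CLOSEST point of their box, so the sum can only
// grow, and the resulting smooth distance can only shrink (underestimate).
double expSum(const Node* n, const Vec3& q, double alpha, double theta) {
    if (!n->left) {  // leaf: exact contributions
        double s = 0;
        for (const auto& p : n->pts) s += std::exp(-alpha * dist(q, p));
        return s;
    }
    Vec3 c;
    for (int i = 0; i < 3; ++i) c[i] = std::clamp(q[i], n->lo[i], n->hi[i]);
    double dBox = dist(q, c);
    if (dist(n->lo, n->hi) < theta * dBox)  // far-field criterion
        return n->count * std::exp(-alpha * dBox);
    return expSum(n->left, q, alpha, theta) + expSum(n->right, q, alpha, theta);
}

double smoothMinDistBVH(const Node* root, const Vec3& q, double alpha, double theta) {
    return -std::log(expSum(root, q, alpha, theta)) / alpha;
}

double smoothMinDistExact(const std::vector<Vec3>& P, const Vec3& q, double alpha) {
    double s = 0;
    for (const auto& p : P) s += std::exp(-alpha * dist(q, p));
    return -std::log(s) / alpha;
}
```

Because every far-field substitution replaces each cluster member's exponential with a larger one, the accelerated result is never above the exact smooth distance, mirroring the guarantee of Section 3.5.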

3.6. Smooth Distance to a Query Mesh

Now that we are equipped with an efficient method for computing $\tilde{d}(p)$, we can combine these distances using LogSumExp to obtain a smooth distance between $\mathcal{M}$ and a query mesh $\mathcal{Q}$:

$$\tilde{d}(\mathcal{Q}) = -\frac{1}{\gamma} \log \sum_{p \in \mathcal{Q}} \exp\left(-\gamma\, \tilde{d}(p)\right), \quad (5)$$

where we have introduced a new accuracy-controlling parameter $\gamma$ that is independent of the inner $\alpha$. In an exact evaluation ($\theta = 0$), this strongly resembles the LogSumExp of all the pairwise distances between each $p \in \mathcal{Q}$ and $f \in F$.

A subtle issue with the current formulation is that the distance gradients are with respect to each query primitive $p$ as a whole, but we need per-vertex gradients, as the vertices are the true degrees of freedom. One simple way to do this is to split each primitive's contribution equally between the $k_p$ vertices of $p$; rewriting the summation over vertices and one-ring neighbourhoods gives us

$$\tilde{d}(\mathcal{Q}) = -\frac{1}{\gamma} \log \sum_{v \in V_\mathcal{Q}} \sum_{p \in N(v)} \frac{1}{k_p} \exp\left(-\gamma\, \tilde{d}(p)\right), \quad (6)$$

where $N(v)$ is the set of one-ring neighbours of $v$. Then, the gradient with respect to a query vertex $v$ is

$$\nabla_v \tilde{d}(\mathcal{Q}) = \frac{\sum_{p \in N(v)} \frac{1}{k_p} \exp\left(-\gamma\, \tilde{d}(p)\right) \nabla \tilde{d}(p)}{\sum_{v' \in V_\mathcal{Q}} \sum_{p \in N(v')} \frac{1}{k_p} \exp\left(-\gamma\, \tilde{d}(p)\right)}. \quad (7)$$
Once again, the convexity of the query and data primitives means that we do not need to modify $\nabla \tilde{d}(p)$, since the direction of steepest descent is the same for all vertices.

The summation in Eq. 6 can be easily parallelized, and the gradient computation requires only a small amount of serialization at the end to redistribute gradients onto vertices.

4. Results

4.1. Implementation

We implemented our method in C++ with Eigen (Guennebaud et al., 2010) and libigl (Jacobson et al., 2018). We designed the implementation so that it could be ported to the GPU, and to this end, we implemented the BVH traversal algorithm outlined in Algorithm 1 using the stackless method of Hapala et al. (2011). Since GPUs exhibit poor performance for double-precision floating point, most of our computations (particularly vector arithmetic) are conducted in single precision, while exponential sums are tracked in double precision to increase the range of usable values. Primitive distances were hand-coded where feasible, and were otherwise implemented using a null-space quadratic program solver written with Eigen (edge-triangle and triangle-triangle distances). An important aspect of these distance computations is robustness, which becomes particularly important because distances are computed using single-precision floats, and errors in the distance can break the conservative property. The hand-coded distance functions make extensive use of an algorithm by Kahan which employs the fused multiply-add instruction to reduce cancellation error (Kahan, 2004), while the quadratic program solver leverages Eigen's numerically stable algorithms.

4.2. Sphere Tracing

Sphere tracing is a method to render signed (and unsigned) distance functions (Hart, 1996). We can use sphere tracing to visualize the zero isosurface of with as a single point, which we have done throughout the paper. Since our function only approaches an exact distance function in the limit, it does not exactly satisfy the properties required for sphere tracing. In particular, although the function underestimates the true distance, its value is not a lower bound on the distance to its zero isosurface, which is quite different from the underlying surface at low . We find that this is not a big issue in practice, and the rendered isosurfaces are of high quality without a more complex root finding procedure such as the one proposed by Kalra and Barr (1989). In cases where sphere tracing enters the negative band, we can raymarch backwards to find the zero isosurface, which in our experience has not introduced a significant performance penalty.
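A sketch of the tracing loop, including the backward raymarch for the negative band (we substitute an exact unit-sphere SDF for the smooth distance; step sizes and tolerances are illustrative):

```python
import numpy as np

def sphere_trace(sdf, origin, direction, t_max=10.0, eps=1e-4, back_step=1e-3):
    """Basic sphere tracing (Hart 1996) with the backward raymarch the
    text mentions: if a step lands inside the negative band, march back
    in small increments to recover the zero isosurface."""
    t = 0.0
    while t < t_max:
        d = sdf(origin + t * direction)
        if abs(d) < eps:
            return t                # converged onto the zero isosurface
        if d < 0.0:
            t -= back_step          # overshot into the negative band: march backwards
            continue
        t += d                      # safe step by the distance bound
    return None                     # ray missed the surface

unit_sphere = lambda p: np.linalg.norm(p) - 1.0
t = sphere_trace(unit_sphere, np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
# the front of the unit sphere is hit near t = 2
```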

Due to the exponentials used in the formulation, a high can result in numerical issues far away from the surface, where becomes inf. As a result, it can be difficult to sphere trace a high- surface. We circumvent this issue with a falloff technique: reduce by a factor of 10 and re-evaluate until the result is finite.
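A sketch of the falloff loop (the toy smooth distance, parameter values, and names are illustrative, not the paper's implementation):

```python
import math

def eval_with_falloff(smooth_dist, alpha, p, min_alpha=1e-6):
    """Falloff technique from the text: if the evaluation is non-finite
    (the exponential sum underflowed to zero far from the surface),
    divide the sharpness by 10 and re-evaluate until it is finite."""
    while alpha > min_alpha:
        d = smooth_dist(alpha, p)
        if math.isfinite(d):
            return d, alpha
        alpha /= 10.0               # fall back to a smoother (lower) sharpness
    raise ValueError("no finite evaluation found")

# toy smooth distance to two data points at distances 40 and 41 from p
def smooth_dist(alpha, p):
    s = math.exp(-alpha * 40.0) + math.exp(-alpha * 41.0)
    return -math.log(s) / alpha if s > 0.0 else float("inf")

d, a = eval_with_falloff(smooth_dist, 1000.0, None)
# at alpha = 1000 and 100 the sum underflows; alpha = 10 yields a finite value near 40
```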

Since we obtain surface normals through distance gradients, we can also use these normals to render surfaces using material capture, or MatCaps (Sloan et al., 2001). A showcase of a variety of renders is shown in Fig. 11, and a summary of performance statistics is given in Table 1. The sphere tracing results were run on a 2015 MacBook Pro using 4 cores. We notice that point renders tend to be much faster than the other two types, even on much larger geometry, which we expect to be a consequence of the BVH structure. Since point BVHs never have overlapping subtrees, Barnes-Hut is able to employ the far field approximation on more nodes than for the other two types. These results are confirmed in Fig. 13.

Figure 11. A showcase of various MatCap-rendered isosurfaces. We can render a lidar point cloud (left), a bowl of edges (middle), and a bucky ball triangle mesh (right).
Name | Type | Size | α | Time (s)
Hex Torus | Edges | 72 | 2 | 7.25626
Net | Edges | 156 | 80 | 3.42445
Bowl | Edges | 736 | 50 | 15.9481
Torus | Tris | 1152 | 300 | 6.05019
Bunny | Tris | 6966 | 1200 | 16.9601
Bucky Ball | Tris | 12548 | 1500 | 84.3932
Deer | Points | 165277 | 1 | 7.38617
Deer | Points | 165277 | 2 | 8.64774
Deer | Points | 165277 | 4 | 11.962
Lidar | Points | 6591087 | 100 | 11.6792
Table 1. A summary of sphere tracing performance results of renders appearing in the paper, along with the mesh size ( for point clouds) and . All examples use and except for the hex torus which uses . The underlying distribution of geometry and the resulting BVH layout have a significant impact on performance, and as a result, point renders tend to perform the best. This corresponds to the results of Fig. 13.

4.3. Parameter Analysis

To demonstrate the effectiveness of Barnes-Hut, we conduct an ablation study on . The approximation becomes increasingly inaccurate farther away from the surface, but since we are only concerned with the zero isosurface, we use sphere tracing as a sampling technique. Using the Stanford bunny mesh's vertices, edges, and faces, with , , and between 0 and 1, we measure the amount of time taken to render an image, as well as the distance along each ray between the approximation's estimate of the isosurface and the actual isosurface (). The results are shown in Fig. 12, using 4 threads on a 2015 MacBook Pro. We see that running time decreases by an order of magnitude even for in all three cases, and all renders take less than 10 s at . The error relative to the bounding box diagonal also remains below 4% for these values of . As a result, we use in all of our examples in the paper unless otherwise stated. We find that can be increased with low error for higher values of (or equivalently, meshes with lower sampling density), but this is not necessary to obtain significant speedups.

Figure 12. Sphere tracing the Stanford bunny’s smooth distance function over its vertices (green), edges (orange), and faces (purple) obtains order-of-magnitude speedups (top) even for small , while achieving low isosurface approximation error relative to the bounding box diagonal (bottom).

The remaining parameters and are associated with the geometry of . In cases where we want a function resembling our underlying geometry, we want to select an that is high enough to produce a good approximation of the surface, but low enough to prevent numerical problems. Since we can interpret as a metric, we can select based on the density of our geometry. One heuristic we found to be a useful starting point for edge and triangle meshes is to set to be the reciprocal of the minimum edge length. For points, we set to be 100 times the length of the bounding box diagonal. These values can be refined by sphere tracing the zero isosurface and tweaking them until the surface looks satisfactory. is essentially an upper bound on and can be determined using this same procedure, which allows it to be used in low- scenarios as well. is very similar to but is related to the geometry of , and can be selected using the same process. A fully automated solution for selecting these parameters is left as future work.
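The reciprocal-of-minimum-edge-length heuristic can be sketched as follows (the array layout, `V` for vertex positions and `E` for edge indices, is an assumption):

```python
import numpy as np

def sharpness_heuristic(V, E):
    """Heuristic starting point from the text: for edge and triangle
    meshes, set the sharpness parameter to the reciprocal of the
    minimum edge length. V is a #V x 3 array of vertex positions,
    E a #E x 2 array of edge vertex indices."""
    lengths = np.linalg.norm(V[E[:, 0]] - V[E[:, 1]], axis=1)
    return 1.0 / lengths.min()

V = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [0.5, 0.25, 0.0]])
E = np.array([[0, 1], [1, 2]])
a = sharpness_heuristic(V, E)   # minimum edge length 0.25, so the heuristic gives 4
```

The value is then refined by inspecting sphere-traced renders, as described above.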

4.4. Performance Benchmark

We performed a large-scale performance evaluation of our method on the Thingi10K dataset (Zhou and Jacobson, 2016). For each model, we created 3 data meshes: all of the model's vertices , all of its edges , and all of its triangles . We then scaled and translated each of these meshes so that their bounding boxes were centered at , with a bounding box diagonal of 0.5. Then, we evaluated from each mesh to each of the voxel centers of a grid between and , in parallel using 16 threads. The results are reported in Fig. 13. We can see that performance is proportional to the number of visited leaves (i.e., the number of primitive distance calculations and far field expansions employed), and the percentage of visited leaves drops significantly as meshes increase in size. These results show that our method is quite scalable, and is very good at handling large meshes.

Just like in Section 4.2, points tend to perform much better than edges and triangles, but now we can see why this is the case. Point clouds have lower variances than edge meshes and triangle meshes in their average leaves-visited percentage; for example, even at 1000 edges/triangles, several meshes require the traversal to visit well over 10% of the leaves in their BVH on average. Also, points generally visit far fewer leaves: point cloud tests visit at most 70 leaves, while edge meshes and triangle meshes can require over 2000 visited leaves. As we can see from the first graph, fewer leaves visited corresponds to faster query times, and so points are empirically faster than the other two types of meshes.

We also ran this benchmark on a GPU — see Appendix E for the results.

Figure 13. Results of the Thingi10K benchmark, using the vertices , edges , and faces of each mesh in the dataset to evaluate queries on a voxel grid. The top row shows that running time is linearly proportional to the number of visited leaves for all mesh types, and the percentage of visited leaves drops sharply as mesh size increases. Furthermore, points tend to perform much better than edges and triangles.

4.5. Rigid Body Collisions

One popular application of distance functions is collision resolution, where they can be used as an intersection-free optimization constraint (Macklin et al., 2020; Li et al., 2020). While past work has used custom solvers to compute distances and find intersection-free trajectories, we show that our function is sufficiently well-behaved to work well in a generic solver. To focus on the distance function constraint rather than the mechanics, we simulate frictionless rigid body contact, using a variety of configurations and a variety of geometry. The solver is powered by Matlab (MathWorks Inc., 2021), where we use its fmincon solver with the default convergence criteria. The rigid body integrators are implemented in pure Matlab code, while the distance function and (per query mesh vertex) gradients are implemented as a MEX plugin written in C++, which runs on 16 CPU threads and is called from Matlab. As mentioned in Section 3.1, the negative band of enables us to use our constraint without root-finding techniques like continuous collision detection, as Li et al. (2020) need to do for their purely unsigned distances. The cost is that our time steps are fairly small, but this simplifies the demonstrations.

First, we show two rigid bunnies colliding together and falling onto a bumpy ground plane as a simple test (Fig. 14). The two bunnies bounce off each other and fall down, without intersections. Note that we have three separate distance constraints (a bunny-bunny constraint and two bunny-plane constraints), which we combine with yet another unweighted LogSumExp into a single constraint, using the maximum associated with the three meshes. Next, we depart from traditional examples and simulate co-dimensional geometry. We can simulate a faceted icosphere falling into a net-like bowl (Fig. 15), where it bounces off one of the wires, rolls in the bowl, and is then flung out. Both of these meshes are represented as edge meshes in the simulation, and we see again that there is no interpenetration. In a more complex scenario, we can mix primitive types in a mesh and still compute distances. We can take a sphere mesh, attach spikes represented as edges to it, and then simulate this shape falling into the net bowl (Fig. 16). We see one of the spikes hit the lip of the bowl, and the spikes get caught in the holes.
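Folding several distance constraints into one can be sketched with an unweighted LogSumExp smooth minimum (the max-shift and the alpha value here are illustrative, not the paper's exact code):

```python
import math

def combine_constraints(dists, alpha):
    """Combine several distance constraints into a single one via an
    unweighted LogSumExp smooth minimum, as described in the text for
    the bunny-bunny and bunny-plane constraints. The smallest distance
    is used as a shift so the exponentials stay in range."""
    m = min(dists)
    s = sum(math.exp(-alpha * (d - m)) for d in dists)
    return m - math.log(s) / alpha

c = combine_constraints([0.02, 0.5, 0.7], alpha=200.0)
# the combined value is a smooth underestimate of the smallest distance
```

Keeping the combined value positive then keeps all three constraints satisfied at once.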

We can go even further and use our function to simulate cases that have not been well defined in previous methods, like point cloud collisions. Although recent work has simulated contact with highly co-dimensional geometry (Li et al., 2021; Ferguson et al., 2021), the inflated isosurfaces provided by our function allow us to close the surface defined by the point cloud without a surface reconstruction preprocess. We demonstrate this by rolling an octocat triangle mesh (Fig. 1) and a point cloud bunny (Fig. 17) down point cloud terrain acquired from a lidar scanner.

Figure 14. Two bunnies colliding and falling onto a bumpy plane, where they skid along the surface.
Figure 15. An icosphere edge mesh is dropped into a hemispherical bowl which is also represented as an edge mesh. It jumps off an edge, rolls around and then spins backward. Note that both meshes are only rendered as triangle meshes, and are represented in the simulation as edge meshes.
Figure 16. A spiky sphere made from a triangle mesh and edges for the spikes, falls into a bowl edge mesh and bounces at the bottom, where the spikes get caught in the holes of the bowl.
Figure 17. A bunny point cloud rolls down a lidar point cloud hill.

We summarize the performance results of these simulations in Table 2. Since we use Matlab's optimization solver, we only measure the average time taken to evaluate the distance between the two meshes in each table entry. Each evaluation was run with 16 CPU threads parallelizing over per-primitive distance computations. As a result, particularly large examples like lidar-octocat were quite expensive, since even with parallelization, there were many per-primitive distances per thread to compute.

Data Mesh | Type | Size | α | Query Mesh | Type | Size | α | Avg. Dist. Time (ms) | dt (s)
Bowl | Edges | 736 | 50 | Icosphere | Edges | 120 | 50 | 2.4162 | 1/200
Bowl | Edges | 736 | 50 | Spiky Sphere | Tris+Edges | 966 | 50 | 86.329 | 1/400
Lidar | Points | 6591087 | 100 | Bunny (large) | Points | 3485 | 20 | 34.584 | 1/100
Lidar | Points | 6591087 | 100 | Octocat | Tris | 37878 | 1000 | 2232.9 | 1/100
Bunny | Tris | 6966 | 2000 | Bunny | Tris | 6966 | 2000 | 22.399 | 1/500
Bumpy Plane | Tris | 800 | 500 | Bunny | Tris | 6966 | 2000 | 530.44 | 1/500
Table 2. Timing results from the simulations presented in the paper. We give information about the data mesh and query mesh, as well as the average distance evaluation time and the simulation time step size. Note that the bunny in the bunny-lidar simulation is 10 times larger than the bunnies in the bunny-bunny simulation.

5. Limitations and Future Work

We have demonstrated a variety of applications of our method, but it is not a panacea. For example, our method tends to be overly conservative in contact resolution, and may need to be augmented with a more exact method to accurately handle tight contacts like a nut screwing onto a bolt. Furthermore, it can be difficult to select parameters optimally. Although we can select satisfactorily, and related parameters require some extra care to select in a way that is numerically stable and yields highly accurate solutions. One possibility would be to analyze the distribution of mesh edge lengths to pick a reasonable that is robust to noise in the distribution. Furthermore, our weight functions, while effective, are designed heuristically, and it would be interesting to see if there is a more exact way to define them based on the underlying geometry as well as the connectivity. Another related direction of future work is analyzing the distribution of a point cloud to eliminate concentration artifacts caused by non-uniform point distributions. Although this is not a problem for point clouds that come from most lidar scanners, it is a useful property to ensure the generalizability of our method.

Another interesting avenue for future work is applying our method to deforming meshes in elastodynamics simulations. Due to 's dependence on the underlying mesh, it would also need to change over time, so it would be interesting to treat as part of the simulation's evolving state. We also believe that there are interesting applications in purely particle-based simulation methods like smoothed-particle hydrodynamics that have no background grid to store implicit functions, where our method could be dropped in to render the fluid surface.

The parallelizability of our method also makes it interesting to consider its integration into purely GPU-based applications. In particular, there are many possible locations for parallelization; for example, while our current implementation parallelizes the query primitive distance computations, an alternative approach could parallelize the traversals in each per-query primitive evaluation. We believe analyzing the tradeoffs of these sorts of approaches in the style of Halide (Ragan-Kelley et al., 2013) is a promising line of future work.

6. Conclusion

We have presented a smooth distance function that can be efficiently evaluated in such a way that it conservatively estimates the distance to the underlying geometry. Our function works on the various types of geometry that can be encountered (points, line segments, and triangles) and utilizes weight functions to eliminate isosurface artifacts. We have benchmarked our method on the Thingi10K dataset and shown that it scales well to large geometry. It enables applications such as rigid body contact without the need for a special solver, and allows heterogeneous geometry to be simulated without any special handling. We believe this geometric abstraction is very powerful, and due to its basic preprocessing requirements (simply building a BVH), it can provide a lightweight yet versatile augmentation to the underlying geometry.


  • Alexa et al. (2003) Marc Alexa, Johannes Behr, Daniel Cohen-Or, Shachar Fleishman, David Levin, and Claudio T. Silva. 2003. Computing and rendering point set surfaces. IEEE Transactions on visualization and computer graphics 9, 1 (2003), 3–15.
  • Barill et al. (2018) Gavin Barill, Neil G Dickson, Ryan Schmidt, David IW Levin, and Alec Jacobson. 2018. Fast winding numbers for soups and clouds. ACM Transactions on Graphics (TOG) 37, 4 (2018), 1–12.
  • Barnes and Hut (1986) Josh Barnes and Piet Hut. 1986. A hierarchical O(N log N) force-calculation algorithm. Nature 324, 6096 (1986), 446–449.
  • Belyaev et al. (2013) Alexander Belyaev, Pierre-Alain Fayolle, and Alexander Pasko. 2013. Signed Lp-distance fields. Computer-Aided Design 45, 2 (2013), 523–528. Solid and Physical Modeling 2012.
  • Carr et al. (2001) Jonathan C Carr, Richard K Beatson, Jon B Cherrie, Tim J Mitchell, W Richard Fright, Bruce C McCallum, and Tim R Evans. 2001. Reconstruction and representation of 3D objects with radial basis functions. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques. 67–76.
  • Chen and Zhang (2019) Zhiqin Chen and Hao Zhang. 2019. Learning Implicit Fields for Generative Shape Modeling. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 5932–5941.
  • Ferguson et al. (2021) Zachary Ferguson, Minchen Li, Teseo Schneider, Francisca Gil-Ureta, Timothy Langlois, Chenfanfu Jiang, Denis Zorin, Danny M. Kaufman, and Daniele Panozzo. 2021. Intersection-free Rigid Body Dynamics. ACM Trans. Graph. 40, 4, Article 183 (2021).
  • Guennebaud et al. (2010) Gaël Guennebaud, Benoît Jacob, et al. 2010. Eigen v3.
  • Gurumoorthy and Rangarajan (2009) Karthik S. Gurumoorthy and Anand Rangarajan. 2009. A Schrödinger Equation for the Fast Computation of Approximate Euclidean Distance Functions. In Proceedings of the Second International Conference on Scale Space and Variational Methods in Computer Vision (SSVM '09). Springer-Verlag, Berlin, Heidelberg, 100–111.
  • Hadigheh et al. (2007) Alireza Ghaffari Hadigheh, Oleksandr Romanko, and Tamás Terlaky. 2007. Sensitivity analysis in convex quadratic optimization: simultaneous perturbation of the objective and right-hand-side vectors. Algorithmic Operations Research 2, 2 (2007), 94–94.
  • Hapala et al. (2011) Michal Hapala, Tomáš Davidovič, Ingo Wald, Vlastimil Havran, and Philipp Slusallek. 2011. Efficient stack-less BVH traversal for ray tracing. In Proceedings of the 27th Spring Conference on Computer Graphics. 7–12.
  • Hart (1996) John C Hart. 1996. Sphere tracing: A geometric method for the antialiased ray tracing of implicit surfaces. The Visual Computer 12, 10 (1996), 527–545.
  • Jacobson et al. (2018) Alec Jacobson, Daniele Panozzo, et al. 2018. libigl: A simple C++ geometry processing library.
  • Kahan (2004) William Kahan. 2004. On the cost of floating-point computation without extra-precise arithmetic. World-Wide Web document (2004), 21.
  • Kalra and Barr (1989) D. Kalra and A. H. Barr. 1989. Guaranteed Ray Intersections with Implicit Surfaces. In Proceedings of the 16th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH ’89). Association for Computing Machinery, New York, NY, USA, 297–306.
  • Kazhdan et al. (2006) Michael Kazhdan, Matthew Bolitho, and Hugues Hoppe. 2006. Poisson surface reconstruction. In Proceedings of the fourth Eurographics symposium on Geometry processing, Vol. 7.
  • Kazhdan and Hoppe (2013) Michael Kazhdan and Hugues Hoppe. 2013. Screened poisson surface reconstruction. ACM Transactions on Graphics (ToG) 32, 3 (2013), 1–13.
  • Kreisselmeier and Steinhauser (1979) G. Kreisselmeier and R. Steinhauser. 1979. Systematic Control Design by Optimizing a Vector Performance Index. IFAC Proceedings Volumes 12, 7 (1979), 113–117. IFAC Symposium on computer Aided Design of Control Systems, Zurich, Switzerland, 29-31 August.
  • Lan et al. (2021) Lei Lan, Yin Yang, Danny M. Kaufman, Junfeng Yao, Minchen Li, and Chenfanfu Jiang. 2021. Medial IPC: Accelerated Incremental Potential Contact With Medial Elastics. ACM Trans. Graph. (2021).
  • Li et al. (2020) Minchen Li, Zachary Ferguson, Teseo Schneider, Timothy Langlois, Denis Zorin, Daniele Panozzo, Chenfanfu Jiang, and Danny M. Kaufman. 2020. Incremental Potential Contact: Intersection- and Inversion-free Large Deformation Dynamics. ACM Transactions on Graphics 39, 4 (2020).
  • Li et al. (2021) Minchen Li, Danny M. Kaufman, and Chenfanfu Jiang. 2021. Codimensional Incremental Potential Contact. ACM Trans. Graph. 40, 4, Article 170 (2021).
  • Macklin et al. (2020) Miles Macklin, Kenny Erleben, Matthias Müller, Nuttapong Chentanez, Stefan Jeschke, and Zach Corse. 2020. Local Optimization for Robust Signed Distance Field Collision. Proc. ACM Comput. Graph. Interact. Tech. 3, 1, Article 8 (April 2020), 17 pages.
  • MathWorks Inc. (2021) The MathWorks Inc. 2021. MATLAB.
  • McAdams et al. (2011) Aleka McAdams, Yongning Zhu, Andrew Selle, Mark Empey, Rasmus Tamstorf, Joseph Teran, and Eftychios Sifakis. 2011. Efficient Elasticity for Character Skinning with Contact and Collisions. ACM Trans. Graph. 30, 4, Article 37 (July 2011), 12 pages.
  • Mescheder et al. (2019) Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. 2019. Occupancy Networks: Learning 3D Reconstruction in Function Space. In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR).
  • Mitchell et al. (2015) Nathan Mitchell, Mridul Aanjaneya, Rajsekhar Setaluri, and Eftychios Sifakis. 2015. Non-Manifold Level Sets: A Multivalued Implicit Surface Representation with Applications to Self-Collision Processing. ACM Trans. Graph. 34, 6, Article 247 (Oct. 2015), 9 pages.
  • Panetta et al. (2017) Julian Panetta, Abtin Rahimian, and Denis Zorin. 2017. Worst-Case Stress Relief for Microstructures. ACM Trans. Graph. 36, 4, Article 122 (July 2017), 16 pages.
  • Park et al. (2019) Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. 2019. DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • Peng et al. (2004) Jianbo Peng, Daniel Kristjansson, and Denis Zorin. 2004. Interactive Modeling of Topologically Complex Geometric Detail. ACM Trans. Graph. 23, 3 (Aug. 2004), 635–643.
  • Ragan-Kelley et al. (2013) Jonathan Ragan-Kelley, Connelly Barnes, Andrew Adams, Sylvain Paris, Frédo Durand, and Saman Amarasinghe. 2013. Halide: a language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines. Acm Sigplan Notices 48, 6 (2013), 519–530.
  • Sethi et al. (2012) Manu Sethi, Anand Rangarajan, and Karthik Gurumoorthy. 2012. The Schrödinger distance transform (SDT) for point-sets and curves. In 2012 IEEE Conference on Computer Vision and Pattern Recognition. 198–205.
  • Sloan et al. (2001) Peter-Pike Sloan, William Martin, Amy Gooch, and Bruce Gooch. 2001. The Lit Sphere: A Model for Capturing NPR Shading from Art. In Proceedings of the Graphics Interface 2001 Conference, June 7-9 2001, Ottawa, Ontario, Canada. 143–150.
  • Smith and Schaefer (2015) Jason Smith and Scott Schaefer. 2015. Bijective Parameterization with Free Boundaries. ACM Trans. Graph. 34, 4 (2015).
  • Stein et al. (2018) Oded Stein, Eitan Grinspun, Max Wardetzky, and Alec Jacobson. 2018. Natural Boundary Conditions for Smoothing in Geometry Processing. ACM Trans. Graph. 37, 2, Article 23 (May 2018), 13 pages.
  • Wyvill et al. (1999) Brian Wyvill, Andrew Guy, and Eric Galin. 1999. Extending the CSG Tree. Warping, Blending and Boolean Operations in an Implicit Surface Modeling System. Computer Graphics Forum 18, 2 (1999), 149–158.
  • Zhang et al. (2020) Aston Zhang, Zachary C. Lipton, Mu Li, and Alexander J. Smola. 2020. Dive into Deep Learning.
  • Zhao et al. (2001) Hong-Kai Zhao, Stanley Osher, and Ronald Fedkiw. 2001. Fast surface reconstruction using the level set method. In Proceedings IEEE Workshop on Variational and Level Set Methods in Computer Vision. IEEE, 194–201.
  • Zhou and Jacobson (2016) Qingnan Zhou and Alec Jacobson. 2016. Thingi10K: A Dataset of 10,000 3D-Printing Models. arXiv preprint arXiv:1605.04797 (2016).

Appendix A Proof of Underestimate Property

Here we prove that smooth distances underestimate the true distance, when each distance contribution has an associated weight function.

Theorem A.1.

Let be the data mesh, and let be a query primitive, and each has an associated weight function , where for some constant . Suppose . Then, defining as in Eq. 3 (but using a general query primitive instead of a point), we have for all .


Let . Using this notation, . Then,


Therefore, we have . ∎
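The underestimate property is easy to check numerically; a sketch with unit weights (function and variable names are illustrative):

```python
import math
import random

def smooth_min(dists, alpha):
    """Unweighted LogSumExp smooth minimum: -(1/alpha) log sum_i exp(-alpha d_i).
    Since the sum is at least exp(-alpha * min_i d_i), this never
    exceeds the true minimum distance."""
    s = sum(math.exp(-alpha * d) for d in dists)
    return -math.log(s) / alpha

random.seed(0)
trials = [[random.uniform(0.0, 2.0) for _ in range(10)] for _ in range(100)]
underestimates = all(smooth_min(d, alpha=50.0) <= min(d) for d in trials)
single = smooth_min([1.0], alpha=50.0)  # a single contribution recovers the exact distance
```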

Appendix B Exact Distance Formulation

Here we discuss our distance formulation between simplices and . Denoting barycentric coordinates of as and as , the points on each simplex referenced by the barycentric coordinates are denoted as and respectively. (For points, the barycentric coordinate vector is 0-dimensional, so it can be omitted.) For example, if is a triangle consisting of 3 vertices , and , . Furthermore, we are restricted to convex combinations of (i.e., , and ). Since any point on a simplex can be referenced using barycentric coordinates, we can determine the closest point pair on and by minimizing the squared distance between all points on each simplex:


where and are the argmin of the problem. Combined with the aforementioned constraints on and , we have a quadratic program with linear constraints (and in the case of points, the problem simplifies to the norm).
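The smallest non-trivial instance of this quadratic program, a point against a segment, has a closed-form solution obtained by clamping the unconstrained barycentric minimizer; a sketch (names are ours):

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Closest point on segment [a, b] to p: the 1-D barycentric QP
    min_t |a + t(b - a) - p|^2 subject to 0 <= t <= 1, solved by
    projecting onto the segment's line and clamping to the convex
    combination bounds."""
    ab = b - a
    t = np.dot(p - a, ab) / np.dot(ab, ab)   # unconstrained minimizer
    t = min(max(t, 0.0), 1.0)                # enforce the convex-combination constraint
    q = a + t * ab                           # closest point on the segment
    return np.linalg.norm(p - q), q

d, q = point_segment_distance(np.array([0.0, 1.0, 0.0]),
                              np.array([-1.0, 0.0, 0.0]),
                              np.array([1.0, 0.0, 0.0]))
# the closest point is the origin, at distance 1
```

The higher-dimensional cases (edge-triangle, triangle-triangle) require a genuine QP solver, as described in Section 4.1.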

We are also interested in taking derivatives of this distance. It is well-known that quadratic programs are differentiable (Hadigheh et al., 2007), and in this case, the distance gradient (with respect to ) is equivalent to the distance gradient of the closest point pair and with respect to , which is


Appendix C Weight Function Case Analysis

Here we derive the desired weights in various cases depending on the location of the closest point on a data mesh to point . First, suppose the closest point is on a vertex . Looking at , and assuming is large, we have

In the above derivation, denotes the set of one-ring neighbours of , and the second line is a consequence of high making the other terms in the summation negligible. Our goal is to ensure that , which is equivalent to the condition . We can accomplish this by setting if ’s closest point to is its vertex (which will be the case for all when ’s closest point on is ). When is a triangle mesh and the closest point is along an edge , we can use an identical argument to derive where denotes the set of triangles incident on . Of course, these weights do not satisfy the property , so these conditions actually apply to an unscaled weight function (see Section 3.3).

Now suppose the closest point is on a simplex (but not on one of its faces, in which case the earlier discussion applies). Again assuming is large, we have

In this case, we can simply set .

Appendix D Far Field Approximation

Here we describe the Barnes-Hut far field approximation in more detail. Let be a bounding region of points, be the number of points in , be the diameter of (e.g., the bounding box diagonal), and be the center of the far field approximation of . If for some user-defined , then approximate the exponential summation over points as . Similarly, the gradient contribution of (in the numerator summation of Eq. 4) is now . When our data primitives are edges or triangles, we must take some care with the weight function, which must be incorporated to ensure the approximation is reasonably smooth. Since the approximation is just a single point, there is no undesirable concentration in some regions and we can simply use a weight of , which corresponds to a constant weight function scaled by and attenuated by .
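The far field approximation for a point cloud can be sketched as follows (we use the standard Barnes-Hut diameter-over-distance admissibility test; the `node` dictionary and names are hypothetical stand-ins for the BVH node data):

```python
import math

def far_field_or_exact(alpha, query, node, theta):
    """If the node's diameter over its distance to the query is below
    the user-defined threshold theta, approximate the exponential sum
    over its n points by n * exp(-alpha * d(query, center)); otherwise
    fall back to the exact per-point sum."""
    d_center = math.dist(query, node["center"])
    if d_center > 0.0 and node["diameter"] / d_center < theta:
        return node["n"] * math.exp(-alpha * d_center)     # far field approximation
    return sum(math.exp(-alpha * math.dist(query, p)) for p in node["points"])

node = {"center": (0.0, 0.0), "diameter": 0.02, "n": 2,
        "points": [(-0.01, 0.0), (0.01, 0.0)]}
approx = far_field_or_exact(5.0, (3.0, 0.0), node, theta=0.5)
exact = far_field_or_exact(5.0, (3.0, 0.0), node, theta=0.0)  # theta = 0 forces the exact sum
```

For a query far from a small node, the two sums agree closely, which is exactly the regime in which the traversal substitutes the far field term.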

Viewing the far field approximation as a Taylor series, we have only included the constant term. However, we found that higher-order Taylor series terms tended to produce worse results due to the off-center expansion point amplifying the error of those terms.

Appendix E GPU Benchmark

Figure 18. Results of the Thingi10K benchmark run on the GPU, with the same test as in Fig. 13. The top row shows times and average leaves visited, while the bottom row shows speedups relative to the CPU benchmark. The dotted lines represent a speedup of 1 (i.e., identical performance on the CPU and GPU).

Here we present the results of our GPU benchmark. Structurally, it is identical to the benchmark in Section 4.4, but it is run on a GPU instead of several CPU threads. We ran these tests on a Titan RTX, and implemented our method in CUDA by ensuring our CPU code was also GPU-compatible. The results are presented in Fig. 18. We see that performance is still linearly proportional to leaves visited, and the GPU code consistently outperforms the CPU code, staying above the speedup = 1 line and achieving an average speedup on points and on edges and triangles; however, the relative speedups vary greatly, and are significantly reduced as more leaves are visited. This is likely because global memory accesses become a significant issue in these cases, on top of the additional computation. This is an especially pronounced issue on GPUs since memory is much slower to access on a GPU than on a CPU.