1. Introduction
Distance fields are a fundamental tool in computer graphics. They find use in a wide range of applications, including raymarching and raytracing algorithms, as well as collision detection and resolution schemes. Derivatives of distance fields yield important geometric information, such as normal and curvature data, that can be used for rendering effects or to improve the convergence of optimization schemes. Finally, distance fields abstract away the underlying geometric representation, theoretically enabling algorithms to function on codimensional data.
Unfortunately, evaluating and storing an exact distance field is expensive, both in terms of memory and computation. This has motivated a plethora of work on computing, storing and approximating distance fields. From these in-depth explorations and from the properties of the exact distance field, we can construct a picture of the ideal distance field representation. First, any approximate distance field should be conservative with respect to the original field. Overestimating the distance between objects allows interpenetration, which leads to, at best, artifacts during rendering and, at worst, catastrophic failure during physics simulations. Second, in rendering, animation and simulation, we often require the derivatives of the distance field. Any practical approximation of a distance field should provide an approximation of these differential quantities as well. Third, geometric data comes in many different forms, and an approximate distance field should be applicable to data represented not just as triangles, but as points and line segments as well. Finally, a good distance field approximation should be efficient to evaluate and store. In this paper we present a distance field approximation that satisfies all of the above criteria.
Our approach builds on smooth minimum functions, which have previously been used in geometry processing, computational fabrication and simulation. Specifically, we start with the well-known LogSumExp (Kreisselmeier-Steinhauser) smooth minimum function. The LogSumExp function is guaranteed to provide a conservative bound on the true distance and is differentiable, which satisfies our first two criteria. However, smooth distance functions are not a panacea: applying this formulation naively to codimensional data leads to unacceptable swelling artifacts, a problem for which we provide an efficient method of amelioration. Further, we demonstrate that common procedures for accelerating the evaluation of such functions via fast summation break the conservative bounds. Based on this observation we propose a modified Barnes-Hut procedure which accelerates evaluation of smooth distances while preserving the important conservative property. Taken together, these contributions yield the first distance function approximation that is conservative, differentiable, codimensional and efficient to compute. We evaluate the efficiency of our method on Thingi10K, a large-scale geometry database, and demonstrate its efficacy via a number of codimensional rendering and simulation examples.
2. Related Work
The LogSumExp function is commonly used in deep learning (see, e.g., Zhang et al. (2020)) as a smooth estimate of the maximum of a set of data. Its gradient is the softmax function (which is not a maximum as the name implies, but a smooth estimate of the one-hot argmax of a set of data). LogSumExp can easily be modified to return a smooth minimum distance rather than a maximum (and its gradient is then the softmin). Aside from deep learning, LogSumExp also appears in other contexts where smooth approximations to min/max functions are needed: for example, it is known in the engineering literature as the Kreisselmeier-Steinhauser distance (Kreisselmeier and Steinhauser, 1979). LogSumExp has also been used recently in computer graphics by Panetta et al. (2017) to smoothly blend between microstructure joints. LogSumExp is just one of a number of smooth distance functions. For instance, the norm function has been used as a smooth distance as well, for smooth blending between implicit surfaces (Wyvill et al., 1999) and computing smooth shells around surfaces (Peng et al., 2004). A similar function can be computed directly from boundary integrals (Belyaev et al., 2013). These functions could act as a drop-in replacement for much of our proposed algorithm, but the former function tends to return numerically 0 results for faraway points, which makes the zero isosurface ambiguous, and it is unclear if the latter even exhibits the important underestimate property. Not only does LogSumExp satisfy the desired property, but it numerically returns inf for faraway points, which leaves the zero isosurface unambiguous, and so we choose to construct our method around the LogSumExp function.
Gurumoorthy and Rangarajan (2009) demonstrated a relationship between the Eikonal equation and the time-independent Schrödinger equation (which is a screened Poisson equation), and used this to derive the LogSumExp function as an approximate solution to the Eikonal equation. They evaluate the LogSumExp using a convolution in the frequency domain, which requires a background grid to compute the FFT and its inverse. Further, their method requires all data points to be snapped to grid vertices. While more efficient than a full evaluation, our method achieves comparable asymptotic performance without a background grid and therefore respects the input geometry.
Sethi et al. (2012) extended this line of work by adding support for edges, but they integrate the exponentiated distance over each edge, which, as we show in Section 3.1, can lead to overestimated distances.
Smooth signed distance functions (SDFs) have recently become popular in machine learning as well. A full accounting is beyond the scope of this paper, but see, for instance, Chen and Zhang (2019); Park et al. (2019); Mescheder et al. (2019). These approaches encode geometric information in latent vectors to be used at evaluation time, along with an input position in space to evaluate a learned signed distance function. Aside from being unsigned rather than signed, our distance approximation diverges in two important ways from work on neural SDFs. First, our representation is an augmentation to the exact geometry as a smooth approximation rather than an outright replacement. This means that algorithms that still require the original discrete geometry (Li et al., 2020) for operations such as continuous collision detection can make use of our method. Second, the only preprocessing in our method is building a BVH over the data (which typically takes less than a second) rather than training a neural network.
One particular feature of smooth distance functions is that they can use point data to construct implicit functions whose zero isosurface represents the surface. There are many other methods that accomplish this, such as radial basis functions
(Carr et al., 2001) and Poisson surface reconstruction (Kazhdan et al., 2006; Kazhdan and Hoppe, 2013). Although these methods are capable of producing very accurate surface reconstructions, they require solving large linear systems, while in our approach the implicit function is readily available. Another surface reconstruction method is the point set surface (Alexa et al., 2003), though instead of obtaining an implicit function, this method finds points on the described surface. Level set methods (Zhao et al., 2001) have also been used to reconstruct surfaces, though this requires a grid and may require prohibitively dense voxels to capture fine detail in the underlying surface.
Collision resolution has long been a difficult problem in physics-based animation and engineering. While no efficient method for deformable SDFs exists, a useful approximation is to use a signed distance function of the undeformed space (McAdams et al., 2011). Mitchell et al. (2015) extended this work by using a non-manifold grid to accurately represent high-frequency and even zero-width features. Recently, Macklin et al. (2020) have used SDFs to represent rigid objects that robustly collide with deformable objects. Further, barrier energies for constrained optimization can be seen as a smooth analog for distance-based constraints. These have seen much success in geometry processing (Smith and Schaefer, 2015) and physics simulation (Li et al., 2020; Ferguson et al., 2021). Previously these approaches were all limited to acting on input triangle mesh geometry. Concurrent work proposes separate extensions to codimensional (Li et al., 2021) and medial geometry (Lan et al., 2021). While effective for this particular application, these methods lack the ability to smooth input data (Ferguson et al., 2021) and so cannot, for instance, approximate point clouds as closed surfaces (Fig. 1) for smooth collision resolution.
They also rely on low-level machinery like interval arithmetic to provide conservative distance estimates. Our approach is constructed from the ground up to provide a conservative distance estimate and, because it approximates the true distance function, it is suitable for applications outside of just collision detection.
The key to our method is a carefully designed weighted smooth distance function, combined with a specialized Barnes-Hut approximation (Barnes and Hut, 1986). The Barnes-Hut algorithm was first developed for N-body simulations to reduce computational effort on faraway bodies with negligible contributions to force. Barnes-Hut approximations have seen use in many graphics applications which use rapidly decaying kernels (e.g., (Barill et al., 2018; Alexa et al., 2003)), but as we will show, careful modification is needed to ensure that fast evaluation does not break the conservative bounds of the LogSumExp function.
Contributions
In this paper we present a new method for computing smooth distance fields on codimensional input geometry. We derive blending weights and a modified Barnes-Hut acceleration approach that ensures our method is conservative (points outside the zero isosurface are guaranteed to be outside the surface), accurate, and efficient to evaluate for all the above data types. This, in combination with its ability to smooth sparsely sampled data, like point clouds, enhances typical graphics tasks such as sphere tracing and enables new applications such as direct, codimensional rigid body simulation using unprocessed lidar data.
3. Method
Given an input geometry $\mathcal{S}$ embedded in $\mathbb{R}^3$, the unsigned distance field is defined as
(1) $d(q) = \min_{x \in \mathcal{S}} \|x - q\|,$
where $x$ is a point on the input geometry and $q$ is an arbitrary query point.
In our case, the input geometry is represented as a codimensional data mesh, $\mathcal{M} = (V, F)$. Here $V$ is a set of vertices in $\mathbb{R}^3$, and $F$ is a set of simplices (we deal with points, edges, and triangles). A simplex $s$ consists of a tuple of indices into $V$. For example, a triangle would be represented as $s = (i, j, k)$, where each of $i$, $j$, $k$ indexes $V$. Edges can be constructed from triangle indices using, e.g., $(i, j)$. Lastly, we denote the dimension of a simplex $s$ as $\dim(s)$ and the volume of a simplex as $a_s$ (which is 1 for points). $a_s$ can be omitted for point clouds, but we will use it throughout to keep the notation consistent.
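To make the simplex indexing concrete, the following is a minimal sketch (names and types are illustrative, not the paper's implementation) of representing a triangle as a tuple of vertex indices and enumerating its boundary edges from those indices:

```cpp
#include <array>
#include <cassert>

// A triangle is a tuple of three indices (i, j, k) into the vertex set V;
// an edge is a pair of indices.
using Edge = std::array<int, 2>;
using Tri  = std::array<int, 3>;

// Enumerate the three boundary edges of a triangle, following the
// (i,j), (j,k), (k,i) convention mentioned in the text.
std::array<Edge, 3> triangle_edges(const Tri& t) {
    return {{ {t[0], t[1]}, {t[1], t[2]}, {t[2], t[0]} }};
}
```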
With respect to this discretized input, the distance field computation can be reframed as finding the minimum distance between the query point and all constituent simplices of $\mathcal{M}$:
(2) $d(q) = \min_{s \in F} d(q, s).$
However, $d(q)$ is only $C^0$, and has gradient discontinuities along the medial axes of the data mesh. Furthermore, computing all pairwise primitive distances has complexity linear in the number of data primitives, so it is impractical for large meshes. Our goal is to tackle both problems by designing a distance function that is differentiable and can be efficiently approximated.
We begin by converting the true distance to a smooth distance via an application of the LogSumExp smooth minimum function, which yields
(3) $\tilde{d}(q) = -\frac{1}{\alpha} \ln \left( \sum_{s \in F} w_s \exp\left(-\alpha\, d(q, s)\right) \right),$
where $\alpha > 0$ controls the accuracy and smoothness of the approximation and the $w_s$ are influence weights for each simplex (which will be discussed in more detail in Section 3.3).
Importantly, this function is differentiable for all finite values of $\alpha$, with gradient (with respect to $q$)
(4) $\nabla \tilde{d}(q) = \frac{\sum_{s \in F} \exp\left(-\alpha\, d(q,s)\right) \left( w_s \nabla d(q,s) - \frac{1}{\alpha} \nabla w_s \right)}{\sum_{s \in F} w_s \exp\left(-\alpha\, d(q,s)\right)},$
and is guaranteed to underestimate the true distance to the input mesh (Appendix A). Intuitively, this means that our smooth distance will alert us to crossing the input surface before it happens. This conservative property is crucial for applications such as rendering or collision detection, since it ensures that maintaining separation with respect to the smooth distance is sufficient to maintain separation between input shapes (Fig. 2). Thus, the output of our method remains usable if downstream applications require the underlying geometric representations to be separated.
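To make Eq. 3 concrete, here is a minimal sketch of the unweighted ($w_s = 1$) LogSumExp distance for a point cloud; the names and structure are illustrative assumptions, not the paper's implementation. With unit weights the result never exceeds the true minimum distance, and tightens as $\alpha$ grows:

```cpp
#include <cassert>
#include <cmath>
#include <vector>
#include <array>

using Vec3 = std::array<double, 3>;

double dist(const Vec3& a, const Vec3& b) {
    const double dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// LogSumExp smooth minimum distance from query q to a point cloud with unit
// weights: d~(q) = -(1/alpha) * ln( sum_i exp(-alpha * d_i) ).
double smooth_min_dist(const Vec3& q, const std::vector<Vec3>& pts, double alpha) {
    // Shift by the true minimum for numerical stability; the shift cancels exactly.
    double dmin = INFINITY;
    for (const auto& p : pts) dmin = std::min(dmin, dist(q, p));
    double s = 0.0;
    for (const auto& p : pts) s += std::exp(-alpha * (dist(q, p) - dmin));
    return dmin - std::log(s) / alpha;
}
```

Since every term in the sum is positive and the weights are at least 1, the logarithm is nonnegative, which is the underestimate property in miniature.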
3.1. Smooth Distance to a Single Query Point
As a didactic example, let us apply Eq. 3 to a point cloud, which we do by setting $d(q, s) = \|v_s - q\|$ for each point $v_s$ and $w_s = 1$. We can directly observe the smoothing effect of $\alpha$ (Fig. 3), which can be used to close point clouds. Decreasing $\alpha$ produces progressively smoother surface approximations, and surfaces produced with smaller $\alpha$ values nest those produced with higher values. This nesting is a consequence of the conservative behaviour of the LogSumExp formula.
An interesting property of $\tilde{d}$ is that it forms a small negative band around the underlying surface, which gets smaller as $\alpha$ increases. Essentially, $\tilde{d}$ represents the implicit function of a thin shell around the point cloud. This property is useful for constraints, since small violations of the constraint will result in a sign change, which a purely unsigned function cannot produce and which would otherwise necessitate a root-finding procedure to detect intersections.
All of this taken together means that the naive LogSumExp works well for point cloud geometry.
3.2. Smooth Distances to Co-Dimensional Geometry
A natural extension of Eq. 3 to edge and triangle meshes is to replace the discrete sum over points with a continuous integral (see, e.g., (Sethi et al., 2012)) over the surface:
$\tilde{d}(q) = -\frac{1}{\alpha} \ln \left( \int_{\mathcal{S}} \exp\left(-\alpha \|x - q\|\right) \, dx \right).$
When discretized, this becomes equivalent to applying Eq. 3 to quadrature points on the mesh, while the volumes $a_s$ become the quadrature weights.
While simple, this formulation will unfortunately break the important conservative property of the LogSumExp function because the quadrature weights will, in general, not be greater than or equal to 1 (Appendix A). Fig. 4 shows examples of the overestimation errors introduced by this approach.
Rather, we must return to Eq. 3 and compute the respective $d(q, s)$'s to the constituent mesh triangles and edges exactly. This can be accomplished via efficient quadratic programming, which we detail in Appendix B. However, using unit-valued weights, as we did for points, produces bulging artifacts where primitives connect (Fig. 5). What remains is to compute weight functions that mitigate these effects while simultaneously satisfying our constraint.
3.3. Weight Functions
Panetta et al. (2017) observe the same concentration artifacts we do when using LogSumExp on edge meshes. They propose fixing them using weighted blending, but their weight calculation relies on knowing the convex hull of the edge mesh neighborhood and blending between smooth and exact distance fields. Unfortunately, it is unclear how to extend this to triangles or apply acceleration schemes like Barnes-Hut. Rather, we propose a different weighting scheme which is fast, local and can be easily adapted to triangles.
We center the design of our per-primitive weight functions around high-accuracy use cases, which correspond to high values of $\alpha$. The weights must be spatially varying to counteract the local concentration artifacts, and to simplify the function definition, we will define them as functions of the barycentric coordinates of the closest point on $s$ to $q$, denoted as $\beta$. We will proceed by analyzing different cases for the location of the closest point on $s$ to $q$, and employ polynomial fitting using the identified fixed points.
First, suppose the closest point on $s$ to $q$ is at a vertex $v$. We can show that weights inversely proportional to the valence of $v$ produce an accurate solution (see Appendix C). A similar argument applies when $F$ contains triangles and the closest point is on an edge (or in general, if the closest point is on a face of a simplex in $F$). When the closest point is in the interior of a simplex in $F$ (not on a face), we can show that weights of 1 are sufficient (see Appendix C). This gives us 3 fixed points for edges and 7 fixed points for triangles.
Unfortunately, these weight values are less than 1 on the faces and thus are no longer conservative. We rectify this by first computing unscaled weight functions $\hat{w}_s$, and then scaling by a factor so that the minimum weight value is 1. Although this inflates the zero isosurface, it does so uniformly, so we do not reintroduce the artifacts we wanted to remove. This error is logarithmic in the scale factor and becomes negligible at higher values of $\alpha$.
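The rescaling step above can be sketched as follows (a hypothetical helper, not the paper's code): dividing every unscaled weight by the minimum weight guarantees all weights are at least 1, and because multiplying all weights by a constant $c$ only shifts the LogSumExp distance by $-\ln(c)/\alpha$, the inflation of the zero isosurface is uniform.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Rescale unscaled weights so that min(w) == 1 and hence every w >= 1,
// restoring the conservative (underestimate) property of the LogSumExp bound.
// The common factor shifts the smooth distance uniformly by -ln(1/wmin)/alpha.
std::vector<double> make_conservative(std::vector<double> w) {
    const double wmin = *std::min_element(w.begin(), w.end());
    for (double& x : w) x /= wmin;
    return w;
}
```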
Unfortunately, we still have another, more subtle problem: gradients at simplex boundaries are discontinuous. We want these gradients to be 0 because, once the closest point to $q$ on $s$ is at the boundary, moving $q$ in a direction orthogonal to the boundary does not change the closest point, so the weight function should not change (Fig. 6). Before proceeding, we note that the gradient of $w_s$ is
(5) $\nabla w_s = G \, \frac{\partial w_s}{\partial \beta},$
where $G$ is the matrix of linear shape function gradients from finite element analysis, and $\frac{\partial w_s}{\partial \beta}$ only depends on the weight function itself. In the case of edges, $\beta$ is a single scalar $t$, so setting the normal derivatives to 0 simply involves setting $\frac{\partial w_s}{\partial t} = 0$ at the endpoints of the edge. However, this becomes difficult for triangles, since only relatively high-order polynomials can support this property and those tend to exhibit undesirable oscillatory behaviour. Another approach to solve this problem would be to minimize the Laplacian energy subject to the Dirichlet conditions described before (zero Neumann conditions are the natural boundary conditions of this energy (Stein et al., 2018)), and solve using the finite element method. Unfortunately, this requires a dense mesh for each triangle in $F$, and since we must use a small number of variables to avoid prohibitive memory pressure, finite elements are infeasible for our use case. Even using the boundary element method does not solve this issue, since we would require a volumetric mesh to set Dirichlet conditions on the triangle interior. Thus, we simply use low-order polynomials (Fig. 7) for triangles without any derivative constraints, and instead aim to tame normal derivatives by reducing the magnitude of $G$.
For triangles, our barycentric coordinate vector is $\beta = (\phi_1, \phi_2, \phi_3)$, and our gradients are $\nabla \phi_i = \frac{e_i^{\perp}}{2A}$, where $A$ is the triangle area and $e_i^{\perp}$ represents a 90-degree counterclockwise rotation of the edge $e_i$ opposite vertex $i$. We see that gradients are inversely proportional to triangle area, and are badly behaved for smaller triangles (Fig. 8). To control these gradients, we notice that $\alpha$'s use in Eq. 3 can be interpreted as a uniform scale factor in space, and is equivalent to measuring lengths with the metric $\alpha^2 I$, where $I$ is the identity matrix. This interpretation does not change $d(q, s)$, but it does change the barycentric gradients (e.g., we now have $\nabla \phi_i = \frac{e_i^{\perp}}{2 \alpha A}$), which gives us direct control over the problematic gradient term (Fig. 8). We also use this gradient scaling for edges in our implementation, though it is not necessary.
To summarize our design so far, we now have 5 constraints for edge weights (3 value + 2 derivative) and 7 constraints for triangle weights (value constraints only). We have enough constraints to fit a univariate quartic polynomial for edges, and if we duplicate the value constraints on triangle edges, we end up with 10 constraints, which is enough to fit a bivariate cubic polynomial. To make the functions well-shaped and to retain our assumptions about high accuracy, we try to spread out these constraints when applicable — for example, constraints are placed at edge and triangle barycenters for edge and triangle weights respectively, and the two triangle edge constraints are evenly spaced along the edges. Combined with the barycentric gradient scaling method, we now have well-behaved weight functions for high $\alpha$.
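The edge-weight fit can be sketched in closed form under the constraint placement described above (a hypothetical sketch; the endpoint values $w_0$, $w_1$ stand for the inverse-valence fixed points): the five constraints $w(0) = w_0$, $w(1) = w_1$, $w(1/2) = 1$, and $w'(0) = w'(1) = 0$ determine a unique quartic, and the resulting $5 \times 5$ linear system can be solved by hand:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Fit w(t) = c0 + c1 t + c2 t^2 + c3 t^3 + c4 t^4 subject to
// w(0)=w0, w(1)=w1, w(1/2)=1, w'(0)=0, w'(1)=0.
// With A = w1 - w0 and B = 1 - w0, eliminating the system by hand gives
// c0 = w0, c1 = 0, c2 = 16B - 5A, c3 = 14A - 32B, c4 = 16B - 8A.
std::array<double, 5> fit_edge_weight(double w0, double w1) {
    const double A = w1 - w0;
    const double B = 1.0 - w0;
    return { w0, 0.0, 16.0 * B - 5.0 * A, 14.0 * A - 32.0 * B, 16.0 * B - 8.0 * A };
}

// Horner evaluation of the quartic.
double eval_poly(const std::array<double, 5>& c, double t) {
    return c[0] + t * (c[1] + t * (c[2] + t * (c[3] + t * c[4])));
}
```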
A key assumption in our design of $w_s$ is that $\alpha$ is sufficiently large, but since $\alpha$ is a controllable user parameter, this assumption can sometimes be violated, and the weights can overcompensate for concentration. In these cases $w_s$ needs to be further modified to avoid artifacts (Fig. 9). If $\alpha$ surpasses some threshold $\alpha_0$ (which at a high level represents an upper bound on $\alpha$ — see Section 4.3 for more information on how it is selected), then we can use $w_s$ as-is; if $\alpha < \alpha_0$, then we must flatten, or attenuate, $w_s$. Experimentally, we observe that using the scale factor $\alpha / \alpha_0$ to define an attenuated weight function, with a correspondingly scaled gradient, helps minimize these issues. Note that this heuristic does not eliminate the issue for all $\alpha$, but it does mitigate it sufficiently for all ranges of $\alpha$ used in this paper.
3.4. Generalized Query Primitives
We will briefly discuss how to generalize our query point $q$ into a general query primitive $s_q$, which can also be an edge or a triangle. Just like with points, we can compute the distance to another simplex, $d(s_q, s)$, using a positive semidefinite quadratic program with linear constraints (see Appendix B). We now also obtain barycentric coordinates for $s_q$ through the argmin, which we will denote as $\beta_q$. All formulas in the preceding subsections can be adapted for this generalization by simply replacing $q$ with $s_q$, and using the closest point on $s_q$ to $s$ when a single point is needed (e.g., in barycentric coordinate gradients). These changes imply that $\nabla \tilde{d}$ is now with respect to an entire query primitive, but this simply describes a rigid translation of $s_q$.
3.5. Barnes-Hut Approximation
Although our distance function is smooth and easy to differentiate, it requires evaluation of a distance between every pair of primitives in the most general case. However, its use of exponentially decaying functions enables the application of the well-known Barnes-Hut approximation (Barnes and Hut, 1986) to accelerate evaluation. Barnes-Hut uses a far-field approximation to cluster groups of faraway primitives together (typically using a spatial subdivision data structure like an octree or bounding volume hierarchy) and treat them as a single point. The approximation is characterized by the centers of each bounding region, and a user parameter $\theta$ which controls where the far-field approximation is employed — see Appendix D for details. At a high level, lower $\theta$ is more accurate, and $\theta = 0$ results in an exact evaluation.
Placing the far-field expansion at the center of mass reduces overall error (Barnes and Hut, 1986), but it can possibly overestimate the true distance, breaking the conservative nature of the LogSumExp smooth distance. A minor modification can reinstate this property: placing the expansion center on the closest point of the cluster region to the query primitive, rather than at the center of mass. When using an octree or BVH, we pick the closest point on the bounding box to the query point. This may increase the error relative to using the center of mass, but it guarantees that the Barnes-Hut estimate only underestimates the exact smooth distance (Fig. 10). Note that this choice does not affect the gradient if the bounding region is convex (which is the case for bounding box hierarchies and octrees), since rigidly translating the query farther away along the gradient direction does not change the closest point.
Finding the closest point on a box to a query point is simple, but finding the closest point on a box to a query edge or triangle is more complex, and requires solving a quadratic program. However, this is relatively expensive for what is meant to be a fast check to help reduce computation time, so we instead use an approximation of the problem. Viewing a box as the intersection of 6 halfspaces, we identify which halfspaces the primitive lies completely outside of. If there are 3 such halfspaces, they identify a corner of the box that is closest to the primitive; if there are 2 halfspaces, they identify an edge of the box; and if there is 1 halfspace, it identifies a face of the box. If no halfspaces satisfy this criterion, then we simply return the distance to the box as 0 and force the traversal to visit its children. Although this test makes the Barnes-Hut approximation somewhat more conservative, the computational savings more than make up for it — we noticed a substantial improvement in performance in our experiments.
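The simple point case from the paragraph above is just a per-axis clamp; a minimal sketch (names are illustrative) of the conservative expansion-center choice for point queries:

```cpp
#include <algorithm>
#include <array>
#include <cassert>

using Vec3 = std::array<double, 3>;

// Closest point on an axis-aligned box [lo, hi] to a query point q: clamp each
// coordinate independently. Using this point (rather than the cluster's center
// of mass) as the far-field expansion center keeps the Barnes-Hut estimate an
// underestimate of the exact smooth distance.
Vec3 closest_point_on_box(const Vec3& q, const Vec3& lo, const Vec3& hi) {
    return { std::clamp(q[0], lo[0], hi[0]),
             std::clamp(q[1], lo[1], hi[1]),
             std::clamp(q[2], lo[2], hi[2]) };
}
```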
3.6. Smooth Distance to a Query Mesh
Now that we are equipped with an efficient method for computing $\tilde{d}(s_q)$, we can combine these distances using LogSumExp to obtain a smooth distance between $\mathcal{M}$ and a query mesh $\mathcal{M}_q = (V_q, F_q)$:
(6) $\tilde{d}(\mathcal{M}_q) = -\frac{1}{\gamma} \ln \left( \sum_{s_q \in F_q} \exp\left(-\gamma\, \tilde{d}(s_q)\right) \right),$
where we have introduced a new accuracy-controlling parameter $\gamma$ that is independent of the inner $\alpha$. In an exact evaluation ($\theta = 0$), this strongly resembles the LogSumExp of all the pairwise distances between each $s_q \in F_q$ and $s \in F$.
A subtle issue with the current formulation is that the distance gradients are with respect to each query primitive $s_q$ as a whole, but we need per-vertex gradients, as those are the true degrees of freedom. One simple way to do this is to split each primitive's contribution equally between the vertices of $s_q$; rewriting the summation over vertices and one-ring neighbourhoods gives us:
(7) $\tilde{d}(\mathcal{M}_q) = -\frac{1}{\gamma} \ln \left( \sum_{v \in V_q} \sum_{s_q \in N(v)} \frac{1}{|s_q|} \exp\left(-\gamma\, \tilde{d}(s_q)\right) \right),$
where $N(v)$ is the set of one-ring neighbours of $v$ and $|s_q|$ is the number of vertices of $s_q$. Then, the gradient with respect to query vertex $v$ is
(8) $\nabla_v \tilde{d}(\mathcal{M}_q) = \frac{\sum_{s_q \in N(v)} \frac{1}{|s_q|} \exp\left(-\gamma\, \tilde{d}(s_q)\right) \nabla \tilde{d}(s_q)}{\sum_{s_q' \in F_q} \exp\left(-\gamma\, \tilde{d}(s_q')\right)}.$
Once again, the convexity of the query and data primitives means that we do not need to modify the gradient computation, since the direction of steepest descent is the same for all vertices.
The summation in Eq. 6 can be easily parallelized, and the gradient computation requires only a small amount of serialization at the end to redistribute gradients onto vertices.
4. Results
4.1. Implementation
We implemented our method in C++ with Eigen (Guennebaud et al., 2010) and libigl (Jacobson et al., 2018). We designed the implementation so that it could be ported to the GPU, and to this end, we implemented the BVH traversal algorithm outlined in Algorithm 1 using the stackless method of Hapala et al. (2011). Since GPUs exhibit poor performance for double-precision floating point, most of our computations (particularly vector arithmetic) are conducted in single precision, while exponential sums are tracked in double precision to increase the range of usable values. Primitive distances were hand-coded where feasible, and were otherwise implemented using a nullspace quadratic program solver written using Eigen (edge-triangle and triangle-triangle distances). An important aspect of these distance computations is robustness, which becomes particularly important because distances are computed using single-precision floats, and errors in the distance can break the conservative property. The hand-coded distance functions make extensive use of an algorithm by Kahan which employs the fused multiply-add instruction to reduce cancellation error (Kahan, 2004), while the quadratic program solver leverages Eigen's numerically stable algorithms.
4.2. Sphere Tracing
Sphere tracing is a method to render signed (and unsigned) distance functions (Hart, 1996). We can use sphere tracing to visualize the zero isosurface of $\tilde{d}$ with the query taken as a single point, which we have done throughout the paper. Since our function only approaches an exact distance function in the limit, it does not exactly satisfy the properties required for sphere tracing. In particular, although the function underestimates the true distance, its value is not a lower bound on the distance to its own zero isosurface, which can be quite different from the underlying surface at low $\alpha$. We find that this is not a big issue in practice, and the rendered isosurfaces are of high quality without a more complex root-finding procedure such as the one proposed by Kalra and Barr (1989). In cases where sphere tracing enters the negative band, we can raymarch backwards to find the zero isosurface, which in our experience has not introduced a significant performance penalty.
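The basic sphere-tracing loop against a generic distance function can be sketched as follows (names and tolerances are illustrative assumptions); each step advances the ray by the current distance bound, which is safe precisely because the bound is conservative:

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// Minimal sphere-tracing loop (Hart, 1996): march a ray from (ox,oy,oz) along
// the unit direction (dx,dy,dz), stepping by the distance-function value until
// it falls below a hit tolerance. Returns the ray parameter t on a hit, or
// -1.0 on a miss / failure to converge.
double sphere_trace(const std::function<double(double, double, double)>& f,
                    double ox, double oy, double oz,
                    double dx, double dy, double dz,
                    double eps = 1e-6, int max_iter = 256) {
    double t = 0.0;
    for (int i = 0; i < max_iter; ++i) {
        const double d = f(ox + t * dx, oy + t * dy, oz + t * dz);
        if (d < eps) return t;   // within tolerance of the zero isosurface
        t += d;                  // safe step: d never overshoots the surface
    }
    return -1.0;
}
```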
Due to the exponentials used in the formulation, a high $\alpha$ can result in numerical issues far away from the surface, where $\tilde{d}$ becomes infinite. As a result, it can be difficult to sphere trace a high-$\alpha$ surface. We circumvent this issue by using a falloff technique: we reduce $\alpha$ by a factor of 10 and re-evaluate until the result is finite.
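The falloff can be sketched as follows (helper names are assumptions; a naive, unshifted LogSumExp is used deliberately so that the underflow to an infinite result actually occurs):

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <vector>

using Vec3 = std::array<double, 3>;

static double dist(const Vec3& a, const Vec3& b) {
    const double dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Naive (unshifted) LogSumExp distance to a point cloud: for large alpha and
// far-away queries every exp(-alpha*d) underflows to 0, the sum becomes 0,
// and -log(0)/alpha evaluates to +inf.
double naive_smooth_dist(const Vec3& q, const std::vector<Vec3>& pts, double alpha) {
    double s = 0.0;
    for (const auto& p : pts) s += std::exp(-alpha * dist(q, p));
    return -std::log(s) / alpha;
}

// Falloff technique from the text: reduce alpha by a factor of 10 and
// re-evaluate until the result is finite (with a floor to guarantee exit).
double falloff_eval(const Vec3& q, const std::vector<Vec3>& pts, double alpha) {
    double d = naive_smooth_dist(q, pts, alpha);
    while (std::isinf(d) && alpha > 1e-6) {
        alpha /= 10.0;
        d = naive_smooth_dist(q, pts, alpha);
    }
    return d;
}
```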
Since we obtain surface normals through distance gradients, we can also use these normals to render surfaces using material capture, or MatCaps (Sloan et al., 2001). A showcase of a variety of renders is shown in Fig. 11, and a summary of performance statistics is given in Table 1. The sphere trace results were run on a 2015 MacBook Pro using 4 cores. We notice that point renders tend to be much faster than the other two types, even on much larger geometry — we expect this to be a consequence of the BVH structure. Since point BVHs will never have overlapping subtrees, Barnes-Hut is better able to employ the far-field approximation on more nodes than the other two. These results are confirmed in Fig. 13.
Name  Type  $n$  $\alpha$  Time (s)
Hex Torus  Edges  72  2  7.25626
Net  Edges  156  80  3.42445
Bowl  Edges  736  50  15.9481
Torus  Tris  1152  300  6.05019
Bunny  Tris  6966  1200  16.9601
Bucky Ball  Tris  12548  1500  84.3932
Deer  Points  165277  1  7.38617
Deer  Points  165277  2  8.64774
Deer  Points  165277  4  11.962
Lidar  Points  6591087  100  11.6792
4.3. Parameter Analysis
To demonstrate the effectiveness of Barnes-Hut, we conduct an ablation study on $\theta$. The approximation becomes increasingly inaccurate farther away from the surface, but since we are only concerned with the zero isosurface, we use sphere tracing as a sampling technique. Using the Stanford bunny mesh's vertices, edges, and faces, with fixed values of the remaining parameters and $\theta$ between 0 and 1, we measure the amount of time taken to render an image, as well as the distance along each ray between the approximation's estimate of the isosurface and the actual isosurface ($\theta = 0$). The results are shown in Fig. 12, using 4 threads on a 2015 MacBook Pro. We see that running time decreases by an order of magnitude even for small $\theta$ in all three cases, and all renders take less than 10 s at the upper end of the tested range. The error relative to the bounding box diagonal also remains below 4% for these values of $\theta$. As a result, we use this setting of $\theta$ in all of our examples in the paper unless otherwise stated. We find that $\theta$ can be increased with low error for coarser geometry (or equivalently, meshes with lower sampling density), but this is not necessary to obtain significant speedups.
The remaining parameters $\alpha$ and $\alpha_0$ are associated with the geometry of $\mathcal{M}$. In cases where we want a function resembling our underlying geometry, we want to select an $\alpha$ that is high enough to produce a good approximation of the surface, but low enough to prevent numerical problems. Since we can interpret $\alpha$ as a metric, we can select $\alpha$ based on the density of our geometry. One heuristic we found to be a useful starting point for edge and triangle meshes is to set $\alpha$ to be the reciprocal of the minimum edge length. For points, we set $\alpha$ to be 100 times the length of the bounding box diagonal. These results can be refined by sphere tracing the zero isosurface and tweaking the values until the surface looks satisfactory. $\alpha_0$ is essentially an upper bound on $\alpha$ and can be determined using this same procedure, which allows it to be used in low-$\alpha$ scenarios as well. $\gamma$ is very similar to $\alpha$ but is related to the geometry of the query mesh $\mathcal{M}_q$, and can be selected using the same process. A fully automated solution for selecting these parameters is left as future work.
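The edge-length heuristic above can be sketched as a one-line helper (a hypothetical name, shown only to make the starting-point rule concrete): shorter minimum edges mean denser sampling, so the starting $\alpha$ is correspondingly sharper.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Heuristic starting point for edge/triangle meshes described in the text:
// alpha = 1 / (minimum edge length). Denser meshes (shorter edges) get a
// larger alpha, i.e., a tighter smooth-distance approximation.
double alpha_from_edge_lengths(const std::vector<double>& edge_lengths) {
    const double lmin = *std::min_element(edge_lengths.begin(), edge_lengths.end());
    return 1.0 / lmin;
}
```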
4.4. Performance Benchmark
We performed a large-scale performance evaluation of our method on the Thingi10K dataset (Zhou and Jacobson, 2016). For each model, we created 3 data meshes: all of the model's vertices, all of its edges, and all of its triangles. We then scaled and translated each of these meshes so that their bounding boxes were centered at the origin, with a bounding box diagonal of 0.5. Then, we evaluated the smooth distance from each mesh to each of the voxel centers of a grid, in parallel using 16 threads. The results are reported in Fig. 13. We can see that performance is proportional to the number of visited leaves (i.e., the number of primitive distance calculations and far-field expansions employed), and the percentage of visited leaves drops significantly as meshes increase in size. These results show that our method is quite scalable, and is very good at handling large meshes. Just like in Section 4.2, points tend to perform much better than edges and triangles, but now we can see why this is the case. Point clouds have lower variances than edge meshes and triangle meshes in their average leaves-visited percentage — for example, even at 1000 edges/triangles, several meshes require the traversal to visit well over 10% of the leaves in their BVH on average. Also, points generally visit far fewer leaves — point cloud tests visit at most 70 leaves, while edge meshes and triangle meshes can require over 2000 visited leaves. As we can see from the first graph, fewer leaves visited corresponds to faster query times, and so points are empirically faster than the other two types of meshes.
We also ran this benchmark on a GPU — see Appendix E for the results.
4.5. Rigid Body Collisions
One popular application of distance functions is collision resolution, where they can be used as an intersection-free optimization constraint (Macklin et al., 2020; Li et al., 2020). While past work has used custom solvers to compute distances and find intersection-free trajectories, we show that our function is sufficiently well-behaved to work in a generic solver. To focus on the distance function constraint rather than the mechanics, we simulate frictionless rigid body contact, using a variety of configurations and geometry. The solver is implemented in Matlab (MathWorks Inc., 2021), using its fmincon solver with the default convergence criteria. The rigid body integrators are implemented in pure Matlab code, while the distance function and its per-query-mesh-vertex gradients are implemented as a MEX plugin written in C++, which runs on 16 CPU threads and is called from Matlab. As mentioned in Section 3.1, the negative band of our function enables us to use the constraint without root finding techniques like continuous collision detection, which Li et al. (2020) require for their purely unsigned distances. The cost is that our time steps are fairly small, but this simplifies the demonstrations.
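The pattern of handing the distance function to a generic solver as an inequality constraint can be sketched in miniature. The projection loop below is a toy stand-in for a full NLP solver like fmincon, and the floor field and helper names are hypothetical:

```python
def constrained_step(x_pred, dist, grad, iters=50, step=1.0):
    """Project a predicted position back onto the feasible set dist(x) >= 0,
    i.e., solve a tiny version of: minimize ||x - x_pred||^2 s.t. dist(x) >= 0.
    A real pipeline would hand dist and grad to a generic NLP solver."""
    x = list(x_pred)
    for _ in range(iters):
        d = dist(x)
        if d >= 0.0:
            break  # feasible: no interpenetration
        g = grad(x)
        # move along the distance gradient until the constraint holds
        x = [x[i] - step * d * g[i] for i in range(len(x))]
    return x

# Toy scene: a floor at z = 0, with signed distance d(x) = z
floor_dist = lambda x: x[2]
floor_grad = lambda x: [0.0, 0.0, 1.0]
x_new = constrained_step([0.0, 0.0, -0.3], floor_dist, floor_grad)
```

The key point is that only evaluations of the distance and its gradient are needed, which is exactly the interface our MEX plugin exposes to Matlab.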
First, as a simple test, we show two rigid bunnies colliding and falling onto a bumpy ground plane (Fig. 14). The two bunnies bounce off each other and fall down, without intersections. Note that we have 3 separate distance constraints (a bunny-bunny constraint and two bunny-plane constraints), which we combine with yet another unweighted LogSumExp into a single constraint, using the maximum smoothing parameter associated with the three meshes. Next we depart from traditional examples and simulate codimensional geometry. We simulate a faceted icosphere falling into a net-like bowl (Fig. 15), where it bounces off one of the wires, rolls in the bowl, and is then flung out. Both of these meshes are represented as edge meshes in the simulation, and we see again that there is no interpenetration. In a more complex scenario, we can mix primitive types in a mesh and still compute distances: we take a sphere mesh, attach spikes represented as edges to it, and simulate this shape falling into the net bowl (Fig. 16). We see one of the spikes hit the lip of the bowl, and the spikes get caught in the holes.
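The combination of several constraints into one scalar constraint is a plain unweighted LogSumExp smooth minimum, which can be sketched as follows (the shift by the minimum is a standard numerical-stability trick, not part of the formulation):

```python
import math

def smooth_min(values, alpha):
    """Unweighted LogSumExp smooth minimum. The shift by the true minimum
    keeps the exponentials from underflowing; the result never exceeds
    min(values), so the combined constraint stays conservative."""
    m = min(values)
    s = sum(math.exp(-alpha * (v - m)) for v in values)
    return m - math.log(s) / alpha

# Combine three pairwise distance constraints into one scalar constraint
combined = smooth_min([0.12, 0.45, 0.30], alpha=200.0)
```

Because the smooth minimum underestimates the true minimum, enforcing the single combined constraint still enforces each individual one.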
We can go even further and use our function to simulate cases that have not been well defined in previous methods, like point cloud collisions. Although recent work has simulated contact with highly codimensional geometry (Li et al., 2021; Ferguson et al., 2021), the inflated isosurfaces provided by our function allow us to close the surface defined by a point cloud without a surface reconstruction preprocess. We demonstrate this by rolling an octocat triangle mesh (Fig. 1) and a point cloud bunny (Fig. 17) down point cloud terrain acquired from a lidar scanner.
We summarize the performance results of these simulations in Table 2. Since we use Matlab's optimization solver, we only measure the average time taken to evaluate the distance between the two meshes in each table entry. Each evaluation was run with 16 CPU threads parallelizing the per-primitive distance computations. As a result, particularly large examples like the lidar-octocat pair were quite expensive: even with parallelization, each thread had many per-primitive distances to compute.
Mesh 1  Type  # Prims  α  Mesh 2  Type  # Prims  α  Avg. Dist Time (ms)  dt (s)
Bowl  Edges  736  50  Icosphere  Edges  120  50  2.4162  1/200
Bowl  Edges  736  50  Spiky Sphere  Tris+Edges  966  50  86.329  1/400
Lidar  Points  6591087  100  Bunny (large)  Points  3485  20  34.584  1/100
Lidar  Points  6591087  100  Octocat  Tris  37878  1000  2232.9  1/100
Bunny  Tris  6966  2000  Bunny  Tris  6966  2000  22.399  1/500
Bumpy Plane  Tris  800  500  Bunny  Tris  6966  2000  530.44  1/500
5. Limitations and Future Work
We have demonstrated a variety of applications of our method, but it is not a panacea. For example, our method tends to be overly conservative in contact resolution, and may need to be augmented with a more exact method to accurately handle tight contacts such as a nut screwing onto a bolt. Furthermore, it can be difficult to select parameters optimally. Although the Barnes-Hut threshold can be selected satisfactorily, the smoothing parameter and its related parameters require extra care to select in a way that is numerically stable and yields highly accurate solutions. One possibility would be to analyze the distribution of mesh edge lengths to pick a reasonable value that is robust to noise in the distribution. Furthermore, our weight functions, while effective, are designed heuristically, and it would be interesting to see whether they can be defined more precisely based on the underlying geometry as well as the connectivity. Another related direction of future work is analyzing the distribution of a point cloud to eliminate concentration artifacts caused by nonuniform point distributions. Although this is not a problem for point clouds produced by most lidar scanners, it would help ensure the generalizability of our method.
Another interesting avenue for future work is applying our method to deforming meshes in elastodynamics simulations. Due to the smoothing parameter's dependence on the underlying mesh, it would also need to change over time, so it would be interesting to treat it as part of the simulation's evolving state. We also believe there are interesting applications in purely particle-based simulation methods like smoothed-particle hydrodynamics, which have no background grid to store implicit functions and where our method could be dropped in to render the fluid surface.
The parallelizability of our method also makes it interesting to consider its integration into purely GPU-based applications. In particular, there are many possible locations for parallelization; for example, while our current implementation parallelizes over query primitive distance computations, an alternative approach could parallelize the traversals within each per-query-primitive evaluation. We believe analyzing the tradeoffs of these sorts of approaches in the style of Halide (Ragan-Kelley et al., 2013) is a promising line of future work.
6. Conclusion
We have presented a smooth distance function that can be evaluated efficiently while conservatively estimating the distance to the underlying geometry. Our function works on the various types of geometry that can be encountered in practice (points, line segments, and triangles) and utilizes weight functions to eliminate isosurface artifacts. We have benchmarked our method on the Thingi10K dataset and shown that it scales well to large geometry. It enables applications such as rigid body contact without the need for a special solver, and allows heterogeneous geometry to be simulated without any special handling. We believe this geometric abstraction is very powerful, and due to its minimal preprocessing requirements (simply building a BVH), it can provide a lightweight yet versatile augmentation to the underlying geometry.
References
 Alexa et al. (2003) Marc Alexa, Johannes Behr, Daniel Cohen-Or, Shachar Fleishman, David Levin, and Claudio T. Silva. 2003. Computing and rendering point set surfaces. IEEE Transactions on Visualization and Computer Graphics 9, 1 (2003), 3–15.
 Barill et al. (2018) Gavin Barill, Neil G Dickson, Ryan Schmidt, David IW Levin, and Alec Jacobson. 2018. Fast winding numbers for soups and clouds. ACM Transactions on Graphics (TOG) 37, 4 (2018), 1–12.
 Barnes and Hut (1986) Josh Barnes and Piet Hut. 1986. A hierarchical O(N log N) force-calculation algorithm. Nature 324, 6096 (1986), 446–449.
 Belyaev et al. (2013) Alexander Belyaev, Pierre-Alain Fayolle, and Alexander Pasko. 2013. Signed Lp-distance fields. Computer-Aided Design 45, 2 (2013), 523–528. https://doi.org/10.1016/j.cad.2012.10.035 Solid and Physical Modeling 2012.
 Carr et al. (2001) Jonathan C Carr, Richard K Beatson, Jon B Cherrie, Tim J Mitchell, W Richard Fright, Bruce C McCallum, and Tim R Evans. 2001. Reconstruction and representation of 3D objects with radial basis functions. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques. 67–76.

 Chen and Zhang (2019) Zhiqin Chen and Hao Zhang. 2019. Learning Implicit Fields for Generative Shape Modeling. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 5932–5941. https://doi.org/10.1109/CVPR.2019.00609
 Ferguson et al. (2021) Zachary Ferguson, Minchen Li, Teseo Schneider, Francisca Gil-Ureta, Timothy Langlois, Chenfanfu Jiang, Denis Zorin, Danny M. Kaufman, and Daniele Panozzo. 2021. Intersection-free Rigid Body Dynamics. ACM Trans. Graph. 40, 4, Article 183 (2021).
 Guennebaud et al. (2010) Gaël Guennebaud, Benoît Jacob, et al. 2010. Eigen v3. http://eigen.tuxfamily.org.
 Gurumoorthy and Rangarajan (2009) Karthik S. Gurumoorthy and Anand Rangarajan. 2009. A Schrödinger Equation for the Fast Computation of Approximate Euclidean Distance Functions. In Proceedings of the Second International Conference on Scale Space and Variational Methods in Computer Vision (SSVM '09). Springer-Verlag, Berlin, Heidelberg, 100–111. https://doi.org/10.1007/978-3-642-02256-2_9
 Hadigheh et al. (2007) Alireza Ghaffari Hadigheh, Oleksandr Romanko, and Tamás Terlaky. 2007. Sensitivity analysis in convex quadratic optimization: simultaneous perturbation of the objective and righthandside vectors. Algorithmic Operations Research 2, 2 (2007), 94–94.
 Hapala et al. (2011) Michal Hapala, Tomáš Davidovič, Ingo Wald, Vlastimil Havran, and Philipp Slusallek. 2011. Efficient stackless BVH traversal for ray tracing. In Proceedings of the 27th Spring Conference on Computer Graphics. 7–12.
 Hart (1996) John C Hart. 1996. Sphere tracing: A geometric method for the antialiased ray tracing of implicit surfaces. The Visual Computer 12, 10 (1996), 527–545.
 Jacobson et al. (2018) Alec Jacobson, Daniele Panozzo, et al. 2018. libigl: A simple C++ geometry processing library. https://libigl.github.io/.
 Kahan (2004) William Kahan. 2004. On the cost of floating-point computation without extra-precise arithmetic. World-Wide Web document (2004), 21.
 Kalra and Barr (1989) D. Kalra and A. H. Barr. 1989. Guaranteed Ray Intersections with Implicit Surfaces. In Proceedings of the 16th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH ’89). Association for Computing Machinery, New York, NY, USA, 297–306. https://doi.org/10.1145/74333.74364
 Kazhdan et al. (2006) Michael Kazhdan, Matthew Bolitho, and Hugues Hoppe. 2006. Poisson surface reconstruction. In Proceedings of the fourth Eurographics symposium on Geometry processing, Vol. 7.
 Kazhdan and Hoppe (2013) Michael Kazhdan and Hugues Hoppe. 2013. Screened poisson surface reconstruction. ACM Transactions on Graphics (ToG) 32, 3 (2013), 1–13.
 Kreisselmeier and Steinhauser (1979) G. Kreisselmeier and R. Steinhauser. 1979. Systematic Control Design by Optimizing a Vector Performance Index. IFAC Proceedings Volumes 12, 7 (1979), 113–117. https://doi.org/10.1016/S1474-6670(17)65584-8 IFAC Symposium on Computer Aided Design of Control Systems, Zurich, Switzerland, 29-31 August.
 Lan et al. (2021) Lei Lan, Yin Yang, Danny M. Kaufman, Junfeng Yao, Minchen Li, and Chenfanfu Jiang. 2021. Medial IPC: Accelerated Incremental Potential Contact With Medial Elastics. ACM Trans. Graph. (2021).
 Li et al. (2020) Minchen Li, Zachary Ferguson, Teseo Schneider, Timothy Langlois, Denis Zorin, Daniele Panozzo, Chenfanfu Jiang, and Danny M. Kaufman. 2020. Incremental Potential Contact: Intersection and Inversionfree Large Deformation Dynamics. ACM Transactions on Graphics 39, 4 (2020).
 Li et al. (2021) Minchen Li, Danny M. Kaufman, and Chenfanfu Jiang. 2021. Codimensional Incremental Potential Contact. ACM Trans. Graph. 40, 4, Article 170 (2021).
 Macklin et al. (2020) Miles Macklin, Kenny Erleben, Matthias Müller, Nuttapong Chentanez, Stefan Jeschke, and Zach Corse. 2020. Local Optimization for Robust Signed Distance Field Collision. Proc. ACM Comput. Graph. Interact. Tech. 3, 1, Article 8 (April 2020), 17 pages. https://doi.org/10.1145/3384538
 MathWorks Inc. (2021) The MathWorks Inc. 2021. MATLAB. https://www.mathworks.com/products/matlab.html
 McAdams et al. (2011) Aleka McAdams, Yongning Zhu, Andrew Selle, Mark Empey, Rasmus Tamstorf, Joseph Teran, and Eftychios Sifakis. 2011. Efficient Elasticity for Character Skinning with Contact and Collisions. ACM Trans. Graph. 30, 4, Article 37 (July 2011), 12 pages. https://doi.org/10.1145/2010324.1964932
 Mescheder et al. (2019) Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. 2019. Occupancy Networks: Learning 3D Reconstruction in Function Space. In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR).
 Mitchell et al. (2015) Nathan Mitchell, Mridul Aanjaneya, Rajsekhar Setaluri, and Eftychios Sifakis. 2015. Non-Manifold Level Sets: A Multivalued Implicit Surface Representation with Applications to Self-Collision Processing. ACM Trans. Graph. 34, 6, Article 247 (Oct. 2015), 9 pages. https://doi.org/10.1145/2816795.2818100
 Panetta et al. (2017) Julian Panetta, Abtin Rahimian, and Denis Zorin. 2017. Worst-Case Stress Relief for Microstructures. ACM Trans. Graph. 36, 4, Article 122 (July 2017), 16 pages. https://doi.org/10.1145/3072959.3073649
 Park et al. (2019) Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. 2019. DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
 Peng et al. (2004) Jianbo Peng, Daniel Kristjansson, and Denis Zorin. 2004. Interactive Modeling of Topologically Complex Geometric Detail. ACM Trans. Graph. 23, 3 (Aug. 2004), 635–643. https://doi.org/10.1145/1015706.1015773
 Ragan-Kelley et al. (2013) Jonathan Ragan-Kelley, Connelly Barnes, Andrew Adams, Sylvain Paris, Frédo Durand, and Saman Amarasinghe. 2013. Halide: a language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines. ACM SIGPLAN Notices 48, 6 (2013), 519–530.
 Sethi et al. (2012) Manu Sethi, Anand Rangarajan, and Karthik Gurumoorthy. 2012. The Schrödinger distance transform (SDT) for pointsets and curves. In 2012 IEEE Conference on Computer Vision and Pattern Recognition. 198–205. https://doi.org/10.1109/CVPR.2012.6247676
 Sloan et al. (2001) Peter-Pike Sloan, William Martin, Amy Gooch, and Bruce Gooch. 2001. The Lit Sphere: A Model for Capturing NPR Shading from Art. In Proceedings of the Graphics Interface 2001 Conference, June 7-9, 2001, Ottawa, Ontario, Canada. 143–150. http://graphicsinterface.org/wp-content/uploads/gi2001-17.pdf
 Smith and Schaefer (2015) Jason Smith and Scott Schaefer. 2015. Bijective Parameterization with Free Boundaries. ACM Trans. Graph. 34, 4 (2015).
 Stein et al. (2018) Oded Stein, Eitan Grinspun, Max Wardetzky, and Alec Jacobson. 2018. Natural Boundary Conditions for Smoothing in Geometry Processing. ACM Trans. Graph. 37, 2, Article 23 (May 2018), 13 pages. https://doi.org/10.1145/3186564
 Wyvill et al. (1999) Brian Wyvill, Andrew Guy, and Eric Galin. 1999. Extending the CSG Tree. Warping, Blending and Boolean Operations in an Implicit Surface Modeling System. Computer Graphics Forum 18, 2 (1999), 149–158. https://doi.org/10.1111/1467-8659.00365
 Zhang et al. (2020) Aston Zhang, Zachary C. Lipton, Mu Li, and Alexander J. Smola. 2020. Dive into Deep Learning. https://d2l.ai.
 Zhao et al. (2001) HongKai Zhao, Stanley Osher, and Ronald Fedkiw. 2001. Fast surface reconstruction using the level set method. In Proceedings IEEE Workshop on Variational and Level Set Methods in Computer Vision. IEEE, 194–201.
 Zhou and Jacobson (2016) Qingnan Zhou and Alec Jacobson. 2016. Thingi10K: A Dataset of 10,000 3DPrinting Models. arXiv preprint arXiv:1605.04797 (2016).
Appendix A Proof of Underestimate Property
Here we prove that smooth distances underestimate the true distance when each distance contribution has an associated weight function. (The original symbols were lost in extraction; we restate the result with explicit notation: d(q, t_j) is the exact distance from query primitive q to data primitive t_j, d(q, T) is the minimum over all primitives, α is the smoothing parameter, and d̃ is the smooth distance.)
Theorem A.1.
Let T be the data mesh, let q be a query primitive, and let each primitive t_j of T have an associated weight function w_j, where w_j(q) >= c for some constant c. Suppose c >= 1. Then, defining d̃ as in Eq. 3 (but using a general query primitive instead of a point), we have d̃(q) <= d(q, T) for all q.
Proof.
Let t_k be the primitive of T closest to q, so that d(q, T) = d(q, t_k). Using this notation, d̃(q) = -(1/α) ln Σ_j w_j(q) exp(-α d(q, t_j)). Then,
Σ_j w_j(q) exp(-α d(q, t_j)) >= w_k(q) exp(-α d(q, t_k))    (all terms are nonnegative)
>= c exp(-α d(q, t_k))    (since w_k(q) >= c)
>= exp(-α d(q, T))    (since c >= 1)
Since -(1/α) ln(·) is monotonically decreasing, applying it to both ends of this chain reverses the inequality. Therefore, we have d̃(q) <= d(q, T). ∎
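The underestimate property is easy to check numerically: with every weight at least 1, the weighted LogSumExp never exceeds the true minimum distance. A standalone check (illustrative values, not our implementation):

```python
import math

def weighted_smooth_dist(dists, weights, alpha):
    """d~ = -(1/alpha) * ln( sum_j w_j * exp(-alpha * d_j) ), evaluated
    with a shift by the minimum distance for numerical stability."""
    m = min(dists)
    s = sum(w * math.exp(-alpha * (d - m)) for d, w in zip(dists, weights))
    return m - math.log(s) / alpha

dists = [0.5, 0.2, 0.9]      # per-primitive distances
weights = [1.0, 1.5, 2.0]    # each weight >= 1, as in the theorem
approx = weighted_smooth_dist(dists, weights, alpha=50.0)
```

As the proof predicts, larger weights or smaller smoothing parameters only push the estimate further below the true minimum, never above it.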
Appendix B Exact Distance Formulation
Here we discuss our distance formulation between simplices s and t. Denoting barycentric coordinates on s as x and on t as y, the points on each simplex referenced by the barycentric coordinates are denoted s(x) and t(y) respectively. (For points, the barycentric coordinate vector is 0-dimensional and can be omitted.) For example, if s is a triangle consisting of 3 vertices v1, v2, and v3, then s(x) = x1 v1 + x2 v2 + x3 v3. Furthermore, we are restricted to convex combinations (i.e., x_i >= 0 and Σ_i x_i = 1, and likewise for y). Since any point on a simplex can be referenced using barycentric coordinates, we can determine the closest point pair on s and t by minimizing the squared distance between all points on each simplex:
(9)  d(s, t)^2 = min_{x, y} ||s(x) - t(y)||^2
where x* and y* are the argmin of the problem. Combined with the aforementioned constraints on x and y, we have a quadratic program with linear constraints (and in the case of two points, the problem simplifies to the norm of their difference).
We are also interested in taking derivatives of this distance. It is well known that quadratic programs are differentiable (Hadigheh et al., 2007), and in this case the distance gradient is equivalent to the gradient of the distance between the closest point pair s(x*) and t(y*), which is
(10)  ∇ d(s, t) = (s(x*) - t(y*)) / ||s(x*) - t(y*)||
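For the simplest non-trivial case, a point versus a line segment, the quadratic program has a closed-form solution obtained by clamping the unconstrained minimizer's barycentric coordinate (illustrative sketch; the gradient then follows from the closest point pair as in Eq. 10):

```python
def point_segment_distance(p, a, b):
    """Distance from point p to segment ab: minimize ||(1-t)*a + t*b - p||^2
    over t in [0, 1], the simplest instance of the barycentric quadratic
    program (solved in closed form by clamping)."""
    dim = len(p)
    ab = [b[i] - a[i] for i in range(dim)]
    ap = [p[i] - a[i] for i in range(dim)]
    denom = sum(c * c for c in ab)
    t = 0.0 if denom == 0.0 else sum(ab[i] * ap[i] for i in range(dim)) / denom
    t = max(0.0, min(1.0, t))  # project onto the constraint set t in [0, 1]
    closest = [a[i] + t * ab[i] for i in range(dim)]
    d = sum((p[i] - closest[i]) ** 2 for i in range(dim)) ** 0.5
    return d, t

d, t = point_segment_distance((0.0, 1.0), (-1.0, 0.0), (1.0, 0.0))
```

Higher-dimensional cases (segment-segment, point-triangle, and so on) follow the same structure but require a proper QP or active-set solve rather than a single clamp.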
Appendix C Weight Function Case Analysis
Here we derive the desired weights in various cases, depending on the location of the closest point on the data mesh to a query point q. First, suppose the closest point is a vertex v. Looking at the terms contributed by the primitives incident on v, and assuming the smoothing parameter α is large, the exponential summation is dominated by
Σ_{t_j ∈ N(v)} w_j exp(-α d(q, v))
where N(v) denotes the set of one-ring neighbours of v, and the restriction to N(v) is a consequence of high α making the other terms in the summation negligible. Our goal is to ensure that this summation reduces to a single term exp(-α d(q, v)), which is equivalent to the condition Σ_{t_j ∈ N(v)} w_j = 1. We can accomplish this by setting w_j = 1/|N(v)| if t_j's closest point to q is its vertex v (which will be the case for all t_j ∈ N(v) when q's closest point on the mesh is v). When the data mesh is a triangle mesh and the closest point lies along an edge e, we can use an identical argument to derive w_j = 1/|F(e)|, where F(e) denotes the set of triangles incident on e. Of course, these weights do not satisfy the properties required of our full weight functions, so these conditions actually apply to an unscaled weight function (see Section 3.3).
Now suppose the closest point lies in the interior of a simplex (and not on one of its faces, in which case the earlier discussion applies). Again assuming α is large, only that simplex's term contributes significantly, and we can simply set w_j = 1.
Appendix D Far Field Approximation
Here we describe the Barnes-Hut far field approximation in more detail. Let R be a bounding region containing n points, let diam(R) be the diameter of R (e.g., its bounding box diagonal), and let c be the center of the far field approximation of R. If diam(R)/d(q, c) < β for some user-defined threshold β, then we approximate the exponential summation over the points in R as n exp(-α d(q, c)). Similarly, the gradient contribution of R (in the numerator summation of Eq. 4) becomes n exp(-α d(q, c)) ∇d(q, c). When our data primitives are edges or triangles, we must take some care with the weight function, which must be incorporated to ensure the approximation is reasonably smooth. Since the approximation is just a single point, there is no undesirable concentration in some regions, and we can simply use a constant weight for the cluster, scaled and attenuated as in Section 3.3.
Viewing the far field approximation as a Taylor series, we have only included the constant term. However, we found that higherorder Taylor series terms tended to produce worse results due to the offcenter expansion point amplifying the error of those terms.
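The acceptance test and far-field substitution can be sketched on a one-dimensional cluster of points (a toy version; the threshold name beta and the diameter-over-distance criterion follow the description above):

```python
import math

def farfield_sum(q, points, center, beta, alpha):
    """Sum of exp(-alpha * |q - p|) over a 1D cluster of points. If the
    cluster's diameter over its distance to q is below the threshold beta,
    collapse the cluster to a single expansion point at its center,
    weighted by the number of points it contains."""
    n = len(points)
    diam = max(points) - min(points)
    dist_c = abs(q - center)
    if dist_c > 0.0 and diam / dist_c < beta:
        return n * math.exp(-alpha * dist_c)  # far field approximation
    return sum(math.exp(-alpha * abs(q - p)) for p in points)  # exact

pts = [0.0, 0.01, 0.02, 0.03]
center = 0.015
approx = farfield_sum(2.0, pts, center, beta=0.5, alpha=1.0)  # far query
exact = sum(math.exp(-abs(2.0 - p)) for p in pts)
```

In the full method the same test is applied recursively at each BVH node, and nearby queries fall through to exact per-primitive evaluation.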
Appendix E GPU Benchmark
Here we present the results of our GPU benchmark. Structurally, it is identical to the benchmark in Section 4.4, but run on a GPU instead of several CPU threads. We ran these tests on a Titan RTX, implementing our method in CUDA by ensuring our CPU code was also GPU-compatible. The results are presented in Fig. 18. Performance is still linearly proportional to the number of visited leaves, and the GPU code consistently outperforms the CPU code, staying above the speedup = 1 line on points, edges, and triangles alike. However, the relative speedups vary greatly, and are significantly reduced as more leaves are visited. This is likely because global memory accesses become a significant bottleneck in these cases, on top of the additional computation. The issue is especially pronounced because memory is much slower to access on a GPU than on a CPU.