Skeletonization via Local Separators

07/07/2020
by Andreas Bærentzen, et al.

We propose a new algorithm for curve skeleton computation which differs from previous algorithms by being based on the notion of local separators. The main benefits of this approach are that it is able to capture relatively fine details and that it works robustly on a range of shape representations. Specifically, our method works on shape representations that can be construed as spatially embedded graphs. Such representations include meshes, volumetric shapes, and graphs computed from point clouds. We describe a simple pipeline where geometric data is initially converted to a graph, optionally simplified, local separators are computed and selected, and finally a skeleton is constructed. We test our pipeline on polygonal meshes, volumetric shapes, and point clouds. Finally, we compare our results to other methods for skeletonization in terms of performance and quality.

1 Introduction

Many of the common shape representations that we use in computer graphics are, in essence, spatially embedded graphs. The polygonal mesh representation is an obvious example since a polygonal mesh is usually understood to be a graph embeddable in a surface of appropriate genus. Moreover, a voxel grid can also be seen as a graph, and it is easy to create a graph from a point cloud by connecting each point to nearby points. Thus, it is not surprising that graph algorithms are often useful in geometry processing, but we also rely on, and enforce, more specific constraints on the representations. For instance, since we tend to require manifoldness of surface meshes, we effectively impose strict limitations on the connectivity: if we were to connect two arbitrary, hitherto unconnected, vertices in a triangle mesh, the result would still be a spatially embedded graph, but it would not be manifold – actually, it would not even be a triangle mesh due to the added edge. For many mesh algorithms, that would invalidate the object as input. The moral seems to be that we usually have to abide by the constraints of the geometry representation. However, there are cases where we can relax these constraints and create algorithms which operate on the broader class of spatially embedded graphs. The benefit of this would be that we might obtain algorithms applicable to a wider range of inputs.

Figure 1: For a simple 2D graph (left), we find the local separators shown as colored vertices and edges (middle). The separators are then replaced with vertices, thereby forming the skeleton (right).

In this paper, we propose such an algorithm which computes curve skeletons from spatially embedded graphs. This algorithm applies to a range of representations from triangle meshes over voxel grids to graphs constructed from scattered points. Moreover, for a given shape, we may have a different situation in different places: it is quite possible that thin structures in one place are represented by a sequence of connected nodes that already resemble a skeleton, while thicker structures are represented by a different part of the same graph where the points lie in a 2-manifold. A very simple example of our algorithm applied to a 2D toy graph is shown in Figure 1, and Figure 2 shows the result of our algorithm applied to the problem of reconstructing a tree from a point cloud. Initially, we construct a graph from the point cloud, then we compute a skeleton using the proposed method, and, finally, the skeleton is garbed using convolution surfaces [BS91], producing the model shown on the right.

Figure 2: Skeletonization and reconstruction of a botanical tree. The image on the left is the input graph, and the inset shows a close-up where the individual edges are easier to see. The middle image shows the skeleton produced by our algorithm, and the image on the right is a reconstruction of the tree using convolution surfaces.

1.1 Contributions and Overview

Unlike the notion of a medial surface, a precise definition of a curve skeleton is elusive, but a number of properties are generally agreed upon. In particular, a curve skeleton is understood to be a locally centered shape abstraction such as could be obtained from a given shape by a process of continuous contraction until we arrive at a 1D structure. Given a point on a curve skeleton, we can identify a set of points on the original shape which contracts to precisely this point on the skeleton. We can think of this set as a skeletal atom. Now, if we consider a part of the shape whose corresponding sub-skeleton does not contain any loops, a skeletal atom contained within this part can be construed as a separator: its removal will disconnect the part (cf. Figure 3).

Figure 3: This figure shows a shape (black outline) and its skeleton (blue). The skeletal atom (red curve) contracts to the point on the skeleton shown as a red disk. Note that the skeletal atom is a separator in the part of the shape indicated by the green region.

We will use the term local separator to clarify that a skeletal atom is not necessarily a separator for the entire shape. It should also be emphasized that since our method operates on discrete shapes, we are looking for discrete separators. Fortunately, the notion of a separator is well known in graph theory, and casting the search for skeletal atoms as a search for vertex separators in a graph is what allows us to generalize the skeletonization process to any object that can be represented as a spatially embedded graph. As mentioned above, this allows us to skeletonize meshes, voxel grids, and graphs constructed by connecting points in space to other points in their vicinity such as the tree shown in Figure 2. It bears mentioning that this type of data does not define a precise surface, but our method can still compute a skeleton because it is based only on the position and connectivity of the vertices in the input graph.

Normally, the search for skeletal atoms is coupled with the process of finding the skeletal structure itself. For instance, if we find the skeleton by a process of contraction, thinning, or segmentation, the skeletal atoms are generally found together with their connectivity. Using the local separator property, we can find skeletal atoms independently of how they are connected. Of course, we also need geometric criteria for whether a separator is a useful skeletal atom, but we can cast a wider net in our search.

Our first contribution is a method which finds local separators in a spatially embedded graph. We propose an algorithm which, starting from a single vertex, grows a connected set of vertices until this set becomes a separator. Subsequently, we shrink the separator again until it becomes a minimal separator. This method is discussed in Section 2.1.

Our second contribution is a procedure for selecting a non-overlapping subset of the found separators. This is achieved by casting the selection as a weighted set packing problem. The fact that the separators are minimal allows us to pack them far more densely than otherwise, obtaining a skeleton that often resolves comparatively fine details. The approach is discussed in Section 2.2.

While our method for finding local separators leads to a good result, it would not be hard to come up with other methods than the one proposed. For instance, methods based on Reeb graphs could be used to generate multiple skeletons which we can see as selections of local separators, and then these skeletons can be blended using the packing method. An example of this is shown in Figure 11.

From the packed set of separators, we can easily extract a skeleton as discussed in Section 2.3. The result of local separator finding and packing as well as skeleton extraction is shown in Figure 1 for a simple 2D input graph.

1.2 Related Work

Blum defined the medial axis of a given shape as the locus of points where a wave propagating uniformly in all directions from the boundary collides with itself [BLU67]. In 2D this locus is a collection of curves, and in 3D it is a collection of surface components denoted the medial surface [SP08]. The medial surface (or axis) can also be defined in a number of other ways, and these several definitions have given rise to a range of methods for computing the medial surfaces of a shape [TDS+16, SBd16]. Almost all of the methods are somewhat sensitive to noise, but in recent work Rebain et al. proposed an algorithm that produces approximately inscribed maximal balls from an unorganized surface point cloud [RAV+19].

An important property of the medial surface is that it is, in principle, invertible allowing us to reconstruct the shape. However, for many applications, such as creating armatures for animation, reconstruction of botanical objects, or computing shape descriptors, it is of greater utility to obtain a curve skeleton [CSM07], and since this is our concern, we will focus on curve skeletons in the following.

There is no universally accepted definition of curve skeletons, but arguing that it should lie in the medial surface, Dey and Sun [DS06] proposed an approach where the medial surface is found first, and then the curve skeleton is subsequently found as the set of critical points of the function that measures the distance between the points touched by the maximal ball at each point of the medial surface. This is effective, but requires a well-defined surface since the geodesic distance function is used. A faster algorithm which also makes it possible to simplify the medial surface to a curve structure was later proposed by Yan et al. [YSC+16] based on the notion of erosion thickness. While these two works define the skeleton in terms of the medial surface, our method does not rely on the medial surface or require that the input is a manifold surface.

Au et al. [ATC+08] and later Tagliasacchi et al. [TAO+12] provide algorithms based on mean curvature flow. Tagliasacchi et al. also let the skeleton be attracted by the medial surface. Again, these approaches require a manifold surface mesh. A related approach is that of Zhou et al. [ZYH+15], who propose an algorithm for computing the generalized cylinder decomposition of 3D shapes. The cylinder decomposition is highly related to the curve skeleton, in the sense that the central axes of the cylinders correspond to the edges of the skeleton. Jiang et al. [JXC+13] follow the strategy of combined clustering (of mesh vertices) and contraction (of skeletal edges) while maintaining a 1-1 relation between clusters and skeletal vertices. Like Jiang et al., we also find a partitioning of the input vertices, but their clustering approach seems to lead to larger clusters and hence a coarser skeleton. Using machine learning, Xu et al. recently employed volumetric deep learning to infer skeletons for animation from 3D shapes [XZK+19].

For volumetric images, there are established methods based on mathematical morphology [SER83] for both medial surfaces and skeletal curves [SBd17]. An example is the thinning approach by Lee et al. [LKC94] which converts a binary image to a corresponding binary image of the skeleton.

For point clouds, the literature is more sparse. Tagliasacchi et al. find points of approximate rotational symmetry from incomplete point clouds and reconstruct a skeleton from a collection of these [TZC09]. Huang et al. also compute skeletons from point clouds without connectivity [HWC+13]. Their algorithm initially finds medial points which minimize a weighted sum of distances to the input points. A second energy term is used to repel these points in order to avoid clustering. The use of the L1 norm helps reduce the influence of outliers. A notion related to point cloud skeletons is that of a Euclidean Steiner tree [JK34, KN01], i.e. the minimal weight tree spanning the points of the point cloud. Conceivably, a Steiner tree could be pruned to compute the skeleton of a point cloud, but it seems infeasible due to the computational complexity of the approach.

The Reeb graph is an established tool in shape analysis [BGS+08]. Intuitively, a Reeb graph is constructed by contracting connected components of isocontours of a given height function to a single point. Doing so for the entire shape leads to a structure which is closely related to the notion of a curve skeleton. Tierny et al. find a number of feature points and then use geodesic distance to closest feature point as the height function in a scheme that finds discrete contours based on which they construct a discrete Reeb graph [TVD08]. These discrete contours could be used as local separators in our scheme, but, unfortunately, Reeb graphs are often suboptimal skeletons, having junctions very close to the surface. In a slightly similar effort, Dey et al. propose a method that finds discrete contours based on Reeb graphs and use it to identify handle and tunnel loops [DFW13].

Several authors share our interest in reconstructing botanical trees by first computing a skeleton from images [QLZ+18] or point clouds [LYO+10, PRE12, GZL+19] and then reconstructing the tree surface from the skeleton.

1.3 Preliminaries

Since we operate on discrete input, we will assume that our shape has been sampled, producing a set of vertices, $V$, and that a geometric position, $p_v$, is associated with each vertex $v \in V$. We will not make any assumptions about whether the vertices are sampled from the surface (i.e. boundary) of the shape or from the interior, but we do require that the vertices are connected by edges, $E$, thus forming a graph $G = (V, E)$. To exemplify, the graph could be an embedded graph as in the case of a manifold triangle mesh, but it could also be a k-nearest neighbor graph defined on a set of scattered points or something else.

1.3.1 Separator and Local Separator

Given a graph, $G = (V, E)$, a separator is a subset of vertices, $\Sigma \subset V$, with the property that its removal will disconnect the graph into at least two non-empty sets. The size of a separator is simply the number of vertices it contains, and we say that a separator is minimal if removing any of its vertices would result in it no longer being a separator.

Say our shape is a cylinder, and let the graph vertices form a regular quad mesh where edges are either parallel to the axis of the cylinder or perpendicular to the axis. Observe that the perpendicular edges form rings around the cylinder, and the vertices connected by these edges separate the vertices on either side of the rings. For a cylinder, it seems natural to define the skeleton by connecting the centers of these rings. Thus, some separators relate to the notion of a skeleton in a useful way, but not all separators are useful. For instance, the set of neighbors of any vertex is a separator that separates said vertex from the other vertices in the graph, but it is rarely useful. It is also clear that separators do not easily describe skeletons of higher genus surfaces; to overcome this challenge, we turn to local separators.

Given a graph, $G$, an induced subgraph, $G'$, consists of a subset, $V'$, of the vertices of $G$ and all of the edges in $G$ that connect a pair of vertices in $V'$. Its boundary, $\partial V'$, consists of those vertices of $V'$ that are linked to the rest of $G$, that is, those that have edges to $V \setminus V'$. Given a graph, $G$, a local separator is a separator, $\Sigma$, of some connected induced subgraph, $G'$, with the further property that no boundary vertices belong to the separator, i.e. $\Sigma \cap \partial V' = \emptyset$, and that $G'$ is disconnected by the separator. The main advantage of local separators is that they give information about the structure even on higher genus surfaces. As a welcome secondary advantage, they are more computationally efficient to find. As in the case of normal separators, we can very easily find local separators that bear no meaningful relation to the skeleton of a shape. However, given an algorithm that finds appropriate local separators, we can hope to obtain a meaningful skeleton. Such an algorithm will be described in Section 2. Note that since we only refer to local separators in the following, the word “separator” should be taken to mean “local separator”.
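To make the definition operational, the following minimal C++ sketch (our own illustration, not the authors' implementation) tests whether a candidate vertex set is a local separator of a given induced subgraph; it assumes the candidate set is contained in the subgraph's vertex set.

```cpp
// Test the local separator property: `sep` is a local separator of the
// subgraph induced by the vertex set `sub` if no vertex of `sep` has an
// edge leaving `sub` (boundary condition) and removing `sep` disconnects
// the induced subgraph.
#include <queue>
#include <unordered_set>
#include <vector>

using Graph = std::vector<std::vector<int>>;  // adjacency lists

// Number of connected components of `verts`, using only edges of g that
// stay inside `verts`.
int components(const Graph& g, const std::unordered_set<int>& verts)
{
    std::unordered_set<int> seen;
    int n = 0;
    for (int s : verts) {
        if (seen.count(s)) continue;
        ++n;
        std::queue<int> q;
        q.push(s);
        seen.insert(s);
        while (!q.empty()) {
            int u = q.front(); q.pop();
            for (int v : g[u])
                if (verts.count(v) && !seen.count(v)) {
                    seen.insert(v);
                    q.push(v);
                }
        }
    }
    return n;
}

bool is_local_separator(const Graph& g,
                        const std::unordered_set<int>& sub,  // induced subgraph
                        const std::unordered_set<int>& sep)  // candidate
{
    for (int u : sep)                  // no boundary vertex in the separator
        for (int v : g[u])
            if (!sub.count(v)) return false;
    std::unordered_set<int> rest;      // the subgraph minus the separator
    for (int u : sub)
        if (!sep.count(u)) rest.insert(u);
    return !rest.empty() && components(g, rest) >= 2;
}
```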

1.3.2 Discrete Curve Skeleton

Given a discrete shape represented through a graph, $G$, a discrete curve skeleton, $\mathcal{S}$, is a graph where each vertex either corresponds to a local separator of $G$ or is an auxiliary branch vertex which joins several vertices. (In a sense, these auxiliary vertices are formed to replace a hyper-edge by a star graph.)

Desirable properties of a skeleton include: (i) It captures the homology of the shape (i.e. it is a deformation retract), (ii) it captures the geometry of the shape, (ii’) including all features, while still (ii”) handling noise consistently, and (iii) it is centered in the shape, in particular, (iii’) the skeleton bones are contained within the “meat” of the shape. We say a skeleton is good if it has these properties. If the skeleton successfully lives up to the third criterion, it will be the case that (iii”) the skeleton is as smooth as the original shape.

Note that there is a fine balance between being robust against noise, and being able to capture all features.

1.3.3 Reconstruction Conditions

Several methods for skeletonization operate by contracting the shape towards the skeleton. This entails the risk that some features may be missed or that noise might be represented as features. It depends on the speed of contraction, and usually there are operating parameters which the user will have to adjust to stay clear of these issues. To a large extent, we sidestep the problem by not contracting the shape. Our method operates by finding sets of vertices, i.e. separators, such that any path from the tip to the base of a feature must pass through such a set. This allows us to accommodate a relatively broad range of inputs as long as they are in the form of spatially embedded graphs. Specifically, our algorithm works well both if vertices are sampled only from the surface and if they are also sampled in the interior.

However, there are some implicit conditions necessary to ensure that we obtain a good skeleton as defined above in Section 1.3.2. Note that the number of local separators we can find depends on how connected the graph is. For instance, a complete graph cannot have a separator (local or global) since all vertices are connected. Thus, we need the right number of edges in order to find a collection of local separators which produces the desired skeleton.

This reasoning leads to the following reconstruction condition (illustrated in Figure 4) which assumes that we locally know the (continuous) skeleton and wish to discover whether the connectivity is sufficient to reconstruct (the topology of) this skeleton from the graph.

Figure 4: On the left, the skeleton for a subgraph (represented by all the filled circles) is known a priori to contain no cycles (pink curve). Yet, the smaller subgraph (yellow and red vertices) does admit a separator (red) which is not a separator in the containing subgraph. Hence the sufficient connectivity condition is violated unless the edges indicated by dashed lines are included. On the right, the skeleton is known to contain a cycle, but now the red dot doesn’t separate any nodes, and the connectivity is excessive.

Say we are given a subgraph, $G'$, whose edges and vertices are known to be sampled from a coherent region for which the corresponding part of the (continuous) curve skeleton is known to contain no cycle. If there is a local separator, $\Sigma$, belonging to a smaller subgraph, $G'' \subseteq G'$, then $\Sigma$ must also be a local separator of $G'$. If this is true, the connectivity is sufficient; otherwise it is insufficient.

To understand why this condition must be fulfilled, consider the opposite: if the larger subgraph, $G'$, does not have $\Sigma$ as a separator, we can traverse vertices of $G'$ outside $G''$ to go from one vertex of $G''$ to another vertex of $G''$ from which the first one would have been separated by $\Sigma$ if the path had been restricted to $G''$. But since the part of the skeleton corresponding to $G'$ is not supposed to contain a cycle, this should not have been possible.

We can use the same condition with different assumptions in order to test for excessive connectivity. Assume the part of the shape covered by $G'$ contains precisely one loop, but there is no induced subgraph, $G'' \subseteq G'$, which contains a separator that is not also a separator of $G'$; then the connectivity is excessive because the supposed cycle in the skeleton cannot be recovered by a collection of separators.

2 The Local Separator Skeletonization Algorithm

The algorithm accepts a graph as input and produces a discrete curve skeleton in the form of another graph as output. Each vertex of the output graph corresponds to a local separator of the input graph. Hence, we can think of the local separators as skeletal atoms, and skeletonization largely becomes the process of searching for a collection of local separators.

Informally, our method works as follows. Initially, we sample vertices on the input graph and grow a region around each sampled vertex. At the point where the vertices just outside a given region form several disjoint components, that region is a separator which we then shrink back until it becomes minimal. This procedure is described in detail in Section 2.1. Our method for finding separators produces a large set of overlapping separators, and we use set packing to obtain a smaller set of disjoint separators. This is described in Section 2.2. Finally, the skeleton is produced as the quotient graph with respect to a partitioning induced by the separators as we discuss in Section 2.3. The three steps are also illustrated in Figure 5.

Figure 5: The three steps of the skeletonization process are shown here for a detail (a hand) from the Armadillo model. From left to right, the images show the original graph, all the found separators, the packed separators, and the output skeleton. Colors indicate quality (red means high quality, blue means low quality).

2.1 Computing a Local Separator

Figure 6: The algorithm for computing a single local separator. A-G illustrate the operation of Algorithm 1. The color red indicates a separator vertex, yellow a front vertex, and green is used for the front vertex closest to the sphere center. Image A shows the situation after we have found the neighbours (yellow and green) of the start vertex (red) and the closest of the neighbors (green). We now use the start vertex and the closest vertex to redefine the sphere so it precisely contains both vertices. In B we see how all neighbors of the previously green vertex are now added to the front and the sphere is redefined again. Images C through G illustrate how more and more vertices are added to both the front and the separator until the front is split into two components. At this point the separator is in fact a local separator. Image H shows the separator after it has been thinned using Algorithm 2.

The algorithm for computing local separators is based on region growing. We assign an initial vertex to the separator, $\Sigma$, and add all its neighbors to the front, which we denote $F$. Once a vertex has been added to $\Sigma$, any neighbors of the added vertex which are not in $\Sigma$ are added to $F$ unless they already belong to $F$. Thus, at any time, $F$ is simply the set of vertices not in $\Sigma$ but neighboring vertices in $\Sigma$.

Instead of growing evenly in all directions, we maintain a bounding sphere around $\Sigma$, and, in each step, we find the vertex in $F$ that is closest to the sphere center and add it to $\Sigma$ while expanding the sphere to exactly contain the vertex. It is worth pointing out that the vertex in $F$ closest to the sphere center may be inside the sphere already. In this case the sphere is unchanged.

The algorithm stops when the front consists of two (or more) connected components as illustrated in Figure 6 (G). At this point, $\Sigma$ is a separator of the subgraph induced by $\Sigma \cup F$, and thus a local separator according to our definition from Section 1.3.1, since $\partial(\Sigma \cup F) \subseteq F$. The benefit of growing the separator using this approach is that it favors going around as opposed to going along features. The center of the expanding sphere that we use to guide the region growing will be used also in the next step when we shrink the separator to a minimal separator, and its radius provides a hint as to local feature size. However, the sphere is not a minimal bounding sphere of $\Sigma$ or possessed of other specific properties of which we are aware.

The region growing is illustrated in Figure 6 (A-G).


Algorithm 1 Local Separator.
The arguments are the graph, $G$, the initial vertex, $v_0$, and a threshold, $\tau$. One helper function computes the ratio of the cardinality of the smallest set to that of the largest in a collection of sets passed to it; another computes the connected components of a vertex set given the connectivity of the graph $G$. A small constant, $\epsilon$, prevents division by zero.

If the algorithm is invoked for a valence 1 vertex (i.e. leaf), it immediately returns a separator consisting only of this vertex. Thus, leaf vertices are defined to be local separators. This is because the skeletonization is designed to be idempotent: applying skeletonization to a skeleton should not change it. Since the interior vertices of a skeleton are separators unless the skeleton contains cliques, this property can be attained simply by defining leaves to be separators. The algorithm also stops and returns an empty separator if the front is empty at any point since, in this case, we have flooded an entire connected component.

If neither condition is met, the algorithm continues while $F$ consists of a single significant connected component. In this context, “significant” means that we also continue if the size of the smallest component is very small compared to the largest. Having found two significant components, the algorithm stops, and $\Sigma$ is first shrunk to a minimal separator (using an approach described below) and then returned, unless the shrunk $\Sigma$ consists of more than a single connected component. If this happens, we return an empty separator. Before returning a non-empty separator, we compute the quality, which is simply the ratio of the size of the smallest front component to that of the largest. Pseudo-code is shown in Algorithm 1.
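The growing phase can be condensed into the following C++ sketch. It is our own naming and simplification, not the paper's pseudo-code: the significance test with the threshold $\tau$ and the subsequent shrinking step are omitted, and it reuses the `components` helper from the earlier sketch.

```cpp
// Grow a separator from v0: repeatedly absorb the front vertex closest to
// the center of a bounding sphere, enlarging the sphere to contain it, and
// stop once the front splits into two components.
#include <array>
#include <cmath>
#include <limits>
#include <unordered_set>
#include <vector>

using Vec3  = std::array<double, 3>;
using Graph = std::vector<std::vector<int>>;

// Helper from the previous sketch.
int components(const Graph& g, const std::unordered_set<int>& verts);

double dist(const Vec3& a, const Vec3& b)
{
    double dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

std::unordered_set<int> grow_separator(const Graph& g,
                                       const std::vector<Vec3>& pos, int v0)
{
    if (g[v0].size() == 1)
        return {v0};                         // leaves are separators by fiat

    std::unordered_set<int> sep{v0};
    std::unordered_set<int> front(g[v0].begin(), g[v0].end());
    Vec3 c = pos[v0];                        // bounding sphere center
    double r = 0.0;                          // ... and radius

    while (!front.empty() && components(g, front) < 2) {
        // Linear scan for the front vertex closest to the sphere center.
        int best = -1;
        double dmin = std::numeric_limits<double>::max();
        for (int v : front) {
            double d = dist(pos[v], c);
            if (d < dmin) { dmin = d; best = v; }
        }
        if (dmin > r) {                      // enlarge the sphere minimally
            double r2 = 0.5 * (dmin + r), t = (r2 - r) / dmin;
            for (int i = 0; i < 3; ++i) c[i] += t * (pos[best][i] - c[i]);
            r = r2;
        }
        sep.insert(best);                    // absorb vertex into separator
        front.erase(best);
        for (int v : g[best])
            if (!sep.count(v)) front.insert(v);
    }
    // An empty front means we flooded a whole component: no separator found.
    return front.empty() ? std::unordered_set<int>{} : sep;
}
```

Note that the sphere update moves the center toward the absorbed vertex by half the overshoot, so the new sphere precisely contains both the old sphere and the new vertex, matching the behavior described in the caption of Figure 6.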

2.1.1 Shrinking a Separator


Algorithm 2 Shrink Separator.
The arguments are the graph, $G$, the separator, $\Sigma$, and the front components. A helper function computes the Laplacian smoothing of the vertex positions belonging to $\Sigma$, keeping the vertices in the rest of $G$ fixed, and returns the smoothed positions.

We want to find a separator that is minimal, but also smooth and balanced. The separator minimization is performed by iteratively removing vertices from the separator until no vertex can be removed. The result clearly depends on the order in which we remove vertices from the separator.

A simple heuristic would be to remove vertices in order of decreasing distance to the center of the sphere used to find the separator in the first place (as discussed above). While this is effective, it has the deficiency that, for irregular structures, it can lead to somewhat meandering separators. To counter this, we initially perform Laplacian smoothing of the initial separator, $\Sigma$, using inverse edge length as the weighting scheme and with fixed positions for the vertices of $F$. Having smoothed the separator, we remove vertices in order of decreasing distance to the sphere center until the separator is minimal. There are a few cases where the shrunk separator breaks into disjoint components. We test for and discard these results. Pseudo-code is shown in Algorithm 2.
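The removal loop could be organized as in the sketch below. This is our own simplification, not Algorithm 2 verbatim: the Laplacian smoothing is assumed to have been applied by the caller, which also supplies the removal order (decreasing distance to the sphere center) and the front component labels.

```cpp
// A separator vertex may leave the separator iff its neighbors outside the
// separator all lie in a single front component; it then joins that
// component. Vertices are tried in the caller-supplied order.
#include <unordered_map>
#include <unordered_set>
#include <vector>

using Graph = std::vector<std::vector<int>>;

std::unordered_set<int> shrink_separator(const Graph& g,
                                         std::unordered_set<int> sep,
                                         std::unordered_map<int, int>& comp,
                                         const std::vector<int>& order)
{
    // comp[v] holds the front component id of v; separator vertices are
    // absent from the map.
    bool changed = true;
    while (changed) {
        changed = false;
        for (int v : order) {
            if (!sep.count(v)) continue;
            int touched = -1;                 // component id that v touches
            bool conflict = false;
            for (int u : g[v]) {
                auto it = comp.find(u);
                if (it == comp.end()) continue;  // u is still in the separator
                if (touched < 0) touched = it->second;
                else if (it->second != touched) { conflict = true; break; }
            }
            if (!conflict && touched >= 0) {  // safe to remove v
                sep.erase(v);
                comp[v] = touched;            // v joins the front component
                changed = true;
            }
        }
    }
    return sep;  // every remaining vertex touches at least two components
}
```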

2.1.2 Optimizing Minimal Separators

A minimal separator is optimal in the sense that no vertex can be removed from it without it ceasing to be a separator. However, we can also measure other qualities in a separator such as the sum of the length of graph edges connecting two vertices that both belong to the separator. Below, we describe an optional step that can be used to optimize minimal separators in this sense.

It is possible to create a variation of a minimal separator, $\Sigma$, by exchanging a vertex, $v \in \Sigma$, with a neighbor vertex, $u$, as long as

  • $u$ does not have other neighbors that belong to the same front component as $v$,

  • $N(u) \cap F = N(v) \cap F$, where $N(\cdot)$ denotes the set of neighbors and $F$ the front, and

  • $u$ belongs to the original (unminimized) separator.

The first of these three conditions ensures that we do not introduce tunnels through the separator. The second condition ensures that $u$ has the same neighbors in the front as $v$. This translates into a more gradual change to the separator, which we have observed leads to better results. The final condition ensures that we do not include vertices from outside the subgraph created by the search in Algorithm 1 (before Algorithm 2), as this could invalidate the separator.

Abiding by the rules above, we can now perform substitutions in order to minimize some particular measure such as the summed length of the edges connecting separator vertices, i.e.

$$E(\Sigma) \;=\; \sum_{\{u,v\} \in E,\; u,v \in \Sigma} \lVert p_u - p_v \rVert . \tag{1}$$

We implemented an algorithm that optimizes a separator by performing any substitution that reduces (1) until no further substitutions will reduce the energy.

This algorithm is applied once to all minimal separators. If it does improve the separator, we perturb it slightly by making random substitutions and then run the optimization again. This procedure is iterated a fixed, small number of times. Each time, we backtrack if the procedure does not improve the separator.
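For concreteness, the energy of Eq. (1) can be evaluated directly from the adjacency lists, as in the sketch below (reusing the Graph, Vec3, and dist helpers from the earlier sketches); a substitution search would repeatedly try exchanges satisfying the three conditions above and keep those that lower this value.

```cpp
// Energy of Eq. (1): summed length of graph edges with both endpoints in
// the separator.
#include <unordered_set>
#include <vector>

double dist(const Vec3& a, const Vec3& b);  // from the earlier sketch

double separator_energy(const Graph& g, const std::vector<Vec3>& pos,
                        const std::unordered_set<int>& sep)
{
    double E = 0.0;
    for (int u : sep)
        for (int v : g[u])
            if (u < v && sep.count(v))      // count each edge once
                E += dist(pos[u], pos[v]);
    return E;
}
```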

2.1.3 Run-Time Analysis

The time consumption for finding a separator can be analyzed in two terms. Let $n$ be the total number of vertices in the separator, $\Sigma$, returned by Algorithm 1. First, we spend $n$ steps adding one vertex at a time to the separator. In each step, we spend time proportional to the current size of the front performing a linear scan for the vertex closest to the current sphere center, and furthermore, time proportional to the number of edges between vertices of the front finding the connected components. Then, we spend time proportional to the size of the separator per round of diffusion in the smoothing step. Thus, the time consumption becomes $O(n(F + E_F))$, where $F$ and $E_F$ are the largest number of vertices and, respectively, edges of the front through the course of the algorithm. Thus, in general and in the worst case, we can make no better analysis than $O(n(n + E_F))$, since the front may be proportional to the separator and may have many edges. However, we do assume that the points are sampled somewhat evenly from a surface or a shape in 3D space. If the points are sampled somewhat evenly from a surface, the front size is expected to be $O(\sqrt{n})$, while if points stem from a solid shape, the front size is expected to be $O(n^{2/3})$. Furthermore, if points stem from a mesh or a voxel grid, the number of edges in the subgraph induced by the front is indeed proportional to the number of vertices: not only will the subgraph be planar, but voxel grids are of constant degree by construction. Thus, for meshes, we expect a running time of $O(n\sqrt{n})$ to find one separator, and for volumetric graphs, we expect a running time of $O(n^{5/3})$. For point clouds where vertices have bounded degree, the running time will correspond to that of meshes or voxels, depending on whether the points are sampled from the surface or the interior of the shape.

2.1.4 Sampling

Since the algorithm for finding a local separator starts from a single vertex of , we need to first choose a set of initial vertices. Clearly, one option is to start the search from all vertices of , but we can improve performance dramatically by sampling. However, simply choosing a random subset of the vertices would lead to small features being missed. Broadly speaking, we want many separators to choose from in order to find a dense packing later.

In our experience, a good compromise is to visit all vertices in random order and use vertex $v$ as a starting point for Algorithm 1 with a probability that decreases with $n_v$, the number of times $v$ has already been included in a separator. This heuristic prioritizes the more sparsely covered regions, but, of course, it does not prevent some vertices from being sampled repeatedly: that the separator found starting from $u$ contains $v$ does not imply the converse; for some inputs there may be “popular” vertices that belong to many separators even if they are not used as starting points many times.
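A sampling loop along these lines is sketched below. The acceptance probability $1/(1 + n_v)$ is our own stand-in for the paper's decreasing function of $n_v$; `grow_separator` is the earlier sketch (the shrinking step would follow it).

```cpp
#include <algorithm>
#include <numeric>
#include <random>
#include <unordered_set>
#include <vector>

std::unordered_set<int> grow_separator(const Graph&, const std::vector<Vec3>&,
                                       int);  // from the earlier sketch

std::vector<std::unordered_set<int>>
sample_separators(const Graph& g, const std::vector<Vec3>& pos)
{
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> U(0.0, 1.0);

    std::vector<int> verts(g.size());
    std::iota(verts.begin(), verts.end(), 0);
    std::shuffle(verts.begin(), verts.end(), rng);  // visit in random order

    std::vector<int> n(g.size(), 0);                // n_v: coverage counts
    std::vector<std::unordered_set<int>> seps;
    for (int v : verts) {
        if (U(rng) > 1.0 / (1.0 + n[v])) continue;  // skip well-covered vertices
        auto s = grow_separator(g, pos, v);
        if (s.empty()) continue;
        for (int u : s) ++n[u];                     // update coverage
        seps.push_back(std::move(s));
    }
    return seps;
}
```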

The total running time of the separator finding step thus becomes the product of the time to find a separator with the number of vertices sampled: $\sum_{v \in V} P(v)\, T(B_v)$, where $P(v)$ is the sampling probability for $v$, $B_v$ is the separator ball found from $v$, and $T$ is the time consumption to find the separator, which is a function of the data set type (be it voxel, mesh, or point cloud graph). Intuitively, especially for locally cylindrical shapes, the probability of sampling should decrease as the size of the local separator through $v$ increases, leading to a better total running time.

2.2 Packing Local Separators

The local separators found using the above method correspond, roughly, to cross sections of the shape. It is natural to consider each separator to be a skeletal atom and map it to a vertex of a discrete curve skeleton. However, the separators overlap, which makes it difficult to infer a reasonable connectivity for such a skeleton. Our solution is to pack the separators on the graph. In a packing, each vertex can be covered by only a single separator. Thus, we obtain a partitioning of the vertices in the graph into non-overlapping sets, plus an additional set of unassigned vertices, but these can be trivially assigned to the closest separator. From this partitioning, we can easily extract a skeleton using a method described in Section 2.3.

Set packing is itself an NP-hard problem: deciding whether some given number of pairwise disjoint sets can be chosen from a family, or optimizing for how many can be chosen. Even for subsets of size $k$, no better guarantee than a $(k/2 + \epsilon)$-approximation is known [HS89], and this is tight up to logarithmic factors [HSS06]. However, for our purposes, a greedy heuristic often yields good results in practice.

We use a greedy, weighted packing scheme [KOR13] since this allows us to prioritize the best separators. While several weighting schemes were considered [CHV79, KOR13], we have chosen a weight based on how balanced the local separator is. Specifically, once the separators have been minimized using Algorithm 2, we compute the ratio of the size of the smallest front component to that of the largest, meaning that balanced separators are prioritized. The weight is returned from Algorithm 1 together with the separator itself.

The heuristic takes $n$ subsets (in our case, separators) of size at most $k$ and performs the following three steps: First, it computes a weighted redundancy count for each separator: how many other separators does it intersect and by how much. Then, it sorts them according to redundancy. Finally, it greedily adds sets in order of increasing redundancy. The running time is thus $O(n^2 k + n \log n + nk)$, i.e. dominated by the first term, corresponding to the first step. As future work, we could parallelize the computation behind the first term, leading to an $O(nk + n \log n)$ time algorithm assuming $n$ processors. Also, one could use a MinHash algorithm to estimate set resemblance [BRO97], in order to speed up the computation of the first step.
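A sketch of the greedy packing step is shown below. The exact redundancy score used here, total overlap divided by the quality weight, is our own stand-in for the weighting of [KOR13]; the structure (score, sort, greedily add disjoint sets) follows the three steps just described.

```cpp
#include <algorithm>
#include <numeric>
#include <unordered_set>
#include <vector>

std::vector<std::unordered_set<int>>
pack_separators(int n_vertices,
                const std::vector<std::unordered_set<int>>& seps,
                const std::vector<double>& quality)
{
    std::vector<int> cover(n_vertices, 0);  // candidates covering each vertex
    for (const auto& s : seps)
        for (int v : s) ++cover[v];

    std::vector<double> score(seps.size(), 0.0);
    for (size_t i = 0; i < seps.size(); ++i) {
        for (int v : seps[i]) score[i] += cover[v] - 1;  // overlap with others
        score[i] /= quality[i] + 1e-9;      // balanced separators sort first
    }
    std::vector<int> idx(seps.size());
    std::iota(idx.begin(), idx.end(), 0);
    std::sort(idx.begin(), idx.end(),
              [&](int a, int b) { return score[a] < score[b]; });

    std::vector<char> used(n_vertices, 0);
    std::vector<std::unordered_set<int>> packed;
    for (int i : idx) {
        bool disjoint = true;
        for (int v : seps[i])
            if (used[v]) { disjoint = false; break; }
        if (!disjoint) continue;            // overlaps an already packed set
        for (int v : seps[i]) used[v] = 1;
        packed.push_back(seps[i]);
    }
    return packed;
}
```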

In our case, we will have at most $n$ different separators, one for each vertex of the graph, of sizes that are worst-case $O(n)$, leading to an $O(n^3)$ running time. However, in practice, for evenly spaced sample points along a surface or within a shape, the smallest separators through a point would be of size $O(\sqrt{n})$ (assuming bounded genus), and even smaller for lanky shapes such as a botanical tree (Figure 16) or foam (Figure 15).

2.3 Skeleton Extraction

Figure 7: Steps of the skeleton extraction. The original (minimal) local separators (a), the maximized separators (b), the quotient graph (c), and the output (d) where each clique complex (bold colored dots) is replaced by a star graph with the cluster vertices as leaves, and a new vertex as center.

From a packing of local separators it is now straightforward to compute the actual skeleton. Initially, we maximize the separators by assigning all vertices that do not already belong to a separator to their closest separator in terms of distance along the graph edges. Since all vertices are now assigned to a separator, the separators form a partitioning of the graph, and the skeleton is computed as the quotient graph under the equivalence relation that two vertices belong to the same maximized separator [SS12]. The position of each vertex of the quotient graph is simply the average of the positions of all of the vertices in the associated separator (i.e. equivalence class).
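The quotient graph construction can be sketched as follows, assuming every vertex has already been labeled with the id of its maximized separator and that all ids in $[0, n_{labels})$ are used; the clique collapsing described next is not included.

```cpp
// One skeleton vertex per label, positioned at the centroid of its class;
// a skeleton edge joins two labels whenever some graph edge joins their
// classes.
#include <algorithm>
#include <array>
#include <set>
#include <utility>
#include <vector>

using Vec3  = std::array<double, 3>;
using Graph = std::vector<std::vector<int>>;

std::pair<Graph, std::vector<Vec3>>
quotient_graph(const Graph& g, const std::vector<Vec3>& pos,
               const std::vector<int>& label, int n_labels)
{
    std::vector<Vec3> spos(n_labels, Vec3{0.0, 0.0, 0.0});
    std::vector<int> cnt(n_labels, 0);
    for (size_t v = 0; v < g.size(); ++v) {  // accumulate class centroids
        for (int i = 0; i < 3; ++i) spos[label[v]][i] += pos[v][i];
        ++cnt[label[v]];
    }
    for (int l = 0; l < n_labels; ++l)
        for (int i = 0; i < 3; ++i) spos[l][i] /= cnt[l];

    Graph skel(n_labels);
    std::set<std::pair<int, int>> seen;      // deduplicate skeleton edges
    for (size_t u = 0; u < g.size(); ++u)
        for (int v : g[u]) {
            int a = label[u], b = label[v];
            if (a != b &&
                seen.insert({std::min(a, b), std::max(a, b)}).second) {
                skel[a].push_back(b);
                skel[b].push_back(a);
            }
        }
    return {skel, spos};
}
```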

The quotient graph is close to the final skeleton, but it contains cliques of size three or more and complexes of such cliques. The cliques arise at junctions where several branches meet, leading to several vertices in the quotient graph that are all connected, as exemplified in Figure 5 (b,c). Often these cliques are of size three, meaning that the separators on three branches are all connected to each other. Moreover, in a complex junction where more than three branches meet, the cliques will share sub-cliques, thus forming complexes. For instance, in Figure 5 (b) we observe that there are separators on each finger and on the hand. In the quotient graph (c), the separators are now vertices, and the connectivity of the vertices reflects the connectivity of the separators, forming a complex of three cliques.

The final step of skeletonization consists of removing clique complexes from the quotient graph. To do so, we find all cliques of size three. Next, we compute the pairwise intersections between all cliques, merging those which share a clique of size two. All edges in the resulting complexes are removed, and the complex is replaced with a single vertex which is then connected to all vertices of the complex. An example is shown in Figure 7 (d).

Clearly, the search for 3-cliques dominates the running time for skeleton extraction, and thus, it is quadratic in the number of packed separators, or, equivalently, quadratic in the number of vertices in the final skeleton.

2.4 Skeleton Smoothing

In some cases, it is important to get a smooth skeleton, and our scheme has no inherent smoothness since the position of each skeletal vertex is computed independently of other skeletal vertices as the average position of the associated separator vertices. Figure 10 (a & b) show examples where a very smooth skeleton results, but the leftmost images of Figure 10 (c & d) show two cases where the resulting skeleton is less smooth.

To (optionally) smooth the output skeleton, we use a simple weighted Laplacian smoothing scheme,

$$p_v \leftarrow \frac{w_v\, p_v + \sum_{u \in N(v)} w_u\, p_u}{w_v + \sum_{u \in N(v)} w_u},\tag{2}$$

where $p_v$ is the position of vertex $v$, $N(v)$ is the set of neighbors, and $w_v$ is the weight. The weight is given by the number of vertices in the separator associated with $v$ in the input graph. If $v$ is a branch vertex that replaces a clique, $w_v$ is simply the average of the weights of the vertices in the clique.

The logic behind this scheme is that we want high-weight vertices to attract more, and low-valency vertices to move less, since leaf vertices in particular should not be smoothed too much, as that could lead to a loss of fine detail. Clearly, this smoothing scheme can be applied iteratively, depending on the desired level of smoothness, as illustrated in Figure 10 (c & d).
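One smoothing pass, following the reconstruction of Eq. (2) given above (which is our reading of the scheme, reusing the Graph and Vec3 types from the earlier sketches), might look as follows:

```cpp
// Each skeletal vertex is pulled toward the weighted average of its own
// position and its neighbors'; w[v] is the size of the separator associated
// with vertex v.
#include <array>
#include <vector>

std::vector<Vec3> smooth_once(const Graph& skel, const std::vector<Vec3>& pos,
                              const std::vector<double>& w)
{
    std::vector<Vec3> out(pos.size());
    for (size_t v = 0; v < skel.size(); ++v) {
        double wsum = w[v];                 // the vertex keeps its own weight
        Vec3 p = {w[v] * pos[v][0], w[v] * pos[v][1], w[v] * pos[v][2]};
        for (int u : skel[v]) {
            wsum += w[u];
            for (int i = 0; i < 3; ++i) p[i] += w[u] * pos[u][i];
        }
        for (int i = 0; i < 3; ++i) out[v][i] = p[i] / wsum;
    }
    return out;
}
```

Because a leaf has a single neighbor, its own (weighted) position dominates the average, which is consistent with the intent that leaves should be smoothed the least.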

3 Results and Discussion

We tested our approach on three different types of input: 2-manifold polygonal meshes, volumetric grids and point data. While the algorithm runs unchanged on all three, some preprocessing is required for other inputs than triangle meshes. In the following, we first describe some implementation details and then relate the results of our experiments.

3.1 Implementation

The graph skeletonization algorithm is implemented in C++. Graph and mesh data structures are based on our open source XX library (omitted for anonymity). The graph data structure is implemented using adjacency lists.

To speed up computations, we parallelize the sampling process using the threading facilities of C++. Since the decision of whether to compute a separator from a given vertex, $v$, depends on how many separators already include $v$, that decision consequently depends on thread scheduling. This entails that the program is not fully deterministic; there is a slight variation between runs. However, in all cases, the results shown are those produced by a single run of the algorithm, not the best result from several runs.

Our method relies on only one parameter: the quality threshold, $\tau$, in Algorithm 1, which was set to the empirically chosen value 0.0875 in all experiments. Sampling was used in separator finding in all cases except Figure 1.

In our comparisons with other methods, we used the CGAL implementation of Mean Curvature Skeletons [TAO+12] and the binary kindly provided by the authors to compare with L-1 Medial Skeletons [HWC+13].

Except where noted, tests were run on a late 2018 Mac Mini (3.2 GHz Intel Core i7-8700B) running MacOS Catalina.

3.2 Surface Meshes

Model            Input    Skeleton vertices    Leaf vertices   Branch vertices   Run time (secs)
                 vertices LS/0.1/1.0           LS/0.1/1.0      LS/0.1/1.0        LS/0.1/1.0
Human hand       12438    120/349/271          6/6/6           3/4/4             87.82/14.97/84.00
M18020           75120    2719/2973/3510       0/2/2           40/80/67          24.20/51.49/60.43
Fertility        24994    181/523/483          3/0/0           7/6/5             534.18/38.26/229.56
Wood Sculpture   19803    241/501/426          7/5/3           11/10/9           141.15/22.90/140.86
Asian Dragon     47701    1276/838/720         83/35/29        29/33/27          512.39/44.04/274.99
Neptune          28052    491/606/569          27/9/12         22/11/16          247.42/27.82/141.26
Rocker Arm       10803    67/230/183           4/2/2           4/2/2             107.33/19.01/86.76
M22081           3970     130/301/290          4/4/3           10/11/11          5.92/4.18/4.50
Frog             37225    483/820/712          23/23/22        14/20/18          1394.70/57.64/388.14
Spider           4675     212/-/-              29/-/-          1/-/-             1.71/-/-
Noisy Dinosaur   23302    240/487/-            13/34/-         7/4/-             299.63/26.78/-
M21362           1828     342/4/36             23/1/8          1/1/3             8.76/2.65/29.77
Armadillo        6488     270/402/372          20/10/14        9/7/12            11.45/9.28/67.07
idem (noisy)     6488     315/347/346          21/8/13         8/5/8             13.61/9.25/74.22
idem (noisier)   6488     494/348/279          36/8/10         11/6/6            17.61/9.04/73.02
Table 1: Summary of the results of triangle mesh skeletonization using our method (LS) and the mean curvature [TAO+12] method (MCF) as implemented in CGAL. Each triple of values reads LS / MCF with quality_speed_tradeoff = 0.1 (the default) / MCF with 1.0; a dash marks a failed MCF run.
Figure 8: Comparison of triangle mesh skeletonization using our method and the mean curvature [TAO+12] method (MCF) as implemented in CGAL. Each mesh is shown shaded on the left, skeletonized with our method (center left), and using MCF on the right (with quality_speed_tradeoff 0.1 center right and 1.0 far right). The models are divided into Groups A through D depending on the outcome for MCF and LS.

We compared our method (LS) to mean curvature (MCF) skeletonization [TAO+12] using the CGAL implementation. The method exposes the parameter quality_speed_tradeoff, which “controls the velocity of movement and approximation quality” according to the CGAL documentation. We ran our experiments with the value 0.1 (the default) and with 1.0, which is slower but, in some cases, improves quality. The comparison was performed on a selection of meshes (many commonly used) of varying complexity, genus, and level of geometric detail. The results are summarized in Table 1.

General Comparison

A number of the output skeletons are shown in Figure 8, and we have divided the results into four groups which are discussed below. We have assessed them against the criteria sketched in Section 1.3.2, and against the additional criterion that some skeleton should be computed.

Group A: both win

contains the human hand and a synthetic mesh. For these two meshes the LS and MCF results are comparable. It may be observed that the MCF result is smoother in the case of the human hand. However, while smoothing is an intrinsic (and not always desired) part of the MCF method, we can easily obtain a smoother output from the LS method using the approach described in Section 2.4, and the result of smoothing the human hand is shown in Figure 10(d).

Group B: excessive smoothing

contains several cases where the LS method yields a skeleton that captures more details (criterion ii) and/or with higher fidelity than MCF (criterion iii). In the case of Fertility, Rocker Arm, and Wood Sculpture, the difference is mainly that the LS approach captures small protrusions not captured by the MCF skeleton. In the case of the Asian Dragon (only head shown), significant head appendages are missing from the MCF skeleton. When it comes to the frog, the LS skeleton captures eyes and nostrils, but both are missing from the MCF skeleton for 0.1, and the eyes are missing for 1.0.

The results for M22081 show clearly that too much smoothing can lead to loss of precision. The highly regular structure of M22081 is captured quite well by the LS method, whereas the horizontal parts are deformed by the MCF method: excessively so for 0.1 and somewhat less for 1.0. In a similar vein, the overall shape of the trident from the Neptune model is captured better by the LS method, and MCF misses one or more barbs. In the case of the hand of the Neptune statue, we also observe that the MCF skeleton seems to be over-smoothed.

Group C: MCF fails

contains two models where MCF fails to produce an output. The spider model consists of multiple intersecting parts and contains non-manifold edges, which is likely the reason for this failure. The noisy Dinosaur contains a few tiny connected components, which may both be the cause of the strange isolated components in the skeleton for 0.1 and the reason for the failure in the case of 1.0. We note that the LS method produces valid and reasonable output in both cases.

Group D: both fail

consists of a single model, M21362, which consists of large, planar regions defined by very few vertices and exceedingly ill-shaped triangles. Both LS and MCF produce a valid but not a reasonable output according to our criterion (i) that the skeleton should capture the homology of the shape, and probably models like this one can only be handled by resampling the mesh.

Comparison of Surface Reconstructions
Figure 9: The original triangle mesh of the Asian Dragon is shown in the center. On the left is a surface reconstructed from our LS skeleton using convolution surfaces and on the right a similar reconstruction from the MCF skeleton. In both cases the skeleton used for reconstruction is shown above the rendered model.

In order to investigate the difference between LS (our method) and MCF, we applied convolution surface based reconstruction [BS91] to the output from both methods. Since curve skeletonization is not invertible, this will not result in an accurate reconstruction, but it does facilitate a visual comparison of how well each method qualitatively captures the input surface. We chose to do this on the Asian Dragon since this model has numerous fangs, digits, and assorted appendages which provide an interesting challenge. To reconstruct a shape from the skeleton, we associate a radius with each node of the skeleton. This radius is computed as the average of the distances from the skeletal node to the associated vertices. In the case of LS, the associated vertices are of course the separator, and MCF reports the vertices that are contracted to each node. The result is shown in Figure 9. It is very clear from the images that LS captures far more features than MCF: in fact, all digits are captured by the LS skeleton and just a few by the MCF skeleton, and a number of other appendages are also missing from the MCF skeleton. This observation is supported by the data in Table 1 where we observe that the LS skeleton contains between two and three times more leaf vertices than the MCF skeleton. Because the leaf nodes of the skeleton represent a larger part of the input mesh in the case of the MCF reconstruction, the tips of features appear comparatively bulbous in the reconstruction as seen in Figure 9.

Smoothness of Skeletons
Figure 10: a and b show a Dupin cyclide skeletonized using the LS method. In b, a segment of the surface has been removed, introducing a boundary (indicated by a red box), and edge contraction has resulted in part of the model being collapsed to a curve (indicated by the other red box). In both a and b, the graph of the cyclide is shown on the left with separators color coded, and the resulting skeleton is shown on the right. c and d show the result of a skeletonization of the Armadillo (c) and the human hand (d) with 0, 1, 2, and 3 steps of smoothing going from left to right.

As previously noted, the result of the LS method can trivially be smoothed in a post process. However, this is not always necessary. Figure 10 (a) shows the result of skeletonizing a Dupin cyclide. The figure shows both the separators (color coded) and the resulting skeleton, which is quite smooth due to the regularity of the cyclide and its sampling, which together ensure that the obtained separators are perfectly circular. It should be noted that to obtain this result, we apply the optimization (Section 2.1.2), which we do not enable for the other examples, since the effect is only noticeable on a regular structure such as the cyclide.

Figure 10 (b) demonstrates that we can also obtain a very regular skeleton for slightly less perfect input. In this case, edges of the thin part of the cyclide have been contracted (here the separators are individual vertices) and a section of the thick part has been cut out (here the separators are open circular arcs), but the result is still a very regular skeleton.

Figure 10 (c & d) show the Armadillo and the human hand models skeletonized with 0, 1, 2, or 3 steps of smoothing. In both cases, smooth skeletons result. We note that the effect seems to be purely beneficial for the hand, but in the case of the Armadillo, some distortion of fine features takes place, comparable to what we might see using the MCF method: cf. Neptune's hand and trident in Figure 8. We can also compare to the Armadillo skeleton produced by MCF, which is shown in Figure 12, but since MCF fails to capture several digits, it is not possible to make a direct comparison.

Blending Reeb Graphs
Figure 11: a, b, c, and d show separators (top) computed from height functions associated with four different directions. The resulting skeletons (below) are tantamount to Reeb graphs for these height functions. e shows the result of packing the “Reeb separators” using our separator packing approach. The result is a skeleton that is less anisotropic and better captures the overall shape.

Given a shape, $S$, and a height function, $f : S \to \mathbb{R}$, the Reeb graph of $S$ with respect to $f$ is the quotient space of $S$ under the equivalence relation that two points, $x$ and $y$, are equivalent if they belong to the same connected component of the same level set, i.e. of $f^{-1}(t)$ for some value $t$.

Given a function and a spatially embedded graph, we can easily extract a set of separators using an approach that is similar to the discrete contour computation algorithm of Tierny et al. [TVD08]. If a value of $f$ is associated with each vertex, we proceed by marking the vertices which are discrete minima as sources, and then we add all their neighbors to a queue, $Q$, of vertices being visited. At each step, we “freeze” the vertex in $Q$ with the smallest value and add all its unvisited neighboring vertices to $Q$. This is similar to the common implementation of Dijkstra's algorithm, except that we are not computing distances but merely tracking an existing function. Importantly, at any time step, the vertices in $Q$ form a separator (between frozen vertices and unvisited vertices) whose connected components can be used as local separators. By keeping track of the time step at which vertices are added and removed, we can easily output non-overlapping local separators, and the result is tantamount to a discrete Reeb graph.
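A sketch of this sweep is given below. It seeds the queue with the discrete minima and snapshots the front at caller-supplied levels; splitting each snapshot into connected components (e.g. with the `components` helper from Section 2.1) then yields the local separators. The bookkeeping that makes the output separators non-overlapping is omitted.

```cpp
#include <functional>
#include <queue>
#include <unordered_set>
#include <utility>
#include <vector>

using Graph = std::vector<std::vector<int>>;

std::vector<std::unordered_set<int>>
sweep_fronts(const Graph& g, const std::vector<double>& f,
             const std::vector<double>& levels)  // ascending snapshot values
{
    using Item = std::pair<double, int>;         // (f value, vertex)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
    std::vector<char> visited(g.size(), 0);
    for (int v = 0; v < (int)g.size(); ++v) {    // seed with discrete minima
        bool is_min = true;
        for (int u : g[v])
            if (f[u] < f[v]) { is_min = false; break; }
        if (is_min) { pq.push({f[v], v}); visited[v] = 1; }
    }
    std::unordered_set<int> Q;                   // visited but not yet frozen
    std::vector<std::unordered_set<int>> fronts;
    size_t next = 0;
    while (!pq.empty()) {
        auto [val, v] = pq.top();                // freeze smallest f value
        pq.pop();
        Q.erase(v);
        while (next < levels.size() && val >= levels[next]) {
            fronts.push_back(Q);                 // snapshot the front
            ++next;
        }
        for (int u : g[v])
            if (!visited[u]) { visited[u] = 1; Q.insert(u); pq.push({f[u], u}); }
    }
    return fronts;
}
```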

Four such separator collections (and associated skeletons) are shown in Figure 11(a-d). The height functions employed are simply the vertex positions projected onto an axis and subsequently smoothed. Applying our packing algorithm, we can now compute a set of combined separators and the resulting skeleton shown in Figure 11e. In effect, our separator packing algorithm blends the four input Reeb graphs, and the result is a skeleton that appears to better capture the overall shape than the individual Reeb graphs. In particular, centeredness (iii) is improved.

Clearly, we could also have employed a number of other functions, such as eigenfunctions of the graph Laplacian or geodesic distance functions, as choices of $f$. We did experiment with all of these options, individually and in combination. However, in our experience, local separators are better at capturing fine details of the geometry.

Noisy Meshes
Figure 12: From left to right these images show: the input mesh, the LS skeleton, the MCF skeleton with quality_speed_tradeoff 0.1, and with 1.0. The rows contain the Armadillo without noise (top row), the same model after one application of vertex displacement noise in the middle row, and two applications of random displacement (bottom row).

We compared LS to MCF in terms of response to noise. The vertices of the Armadillo were corrupted by adding a random vector sampled from a ball of radius equal to half the average edge length. This procedure was also applied a second time to measure the response to more extreme noise. The results are shown in Figure 12. We note that the effect of noise on the LS method appears to mostly be spurious leaf nodes, which could be removed fairly easily, whereas MCF seems to lose rather than gain features. This is unsurprising since MCF is based on smoothing, and more smoothing is probably needed in order to obtain a skeleton from a noisy model.

Quadrilateral Meshes
Figure 13: The effect of increasing connectivity. The original quad mesh is shown left, (a) shows the separators for a graph constructed from a triangulation of the quad mesh, (b) shows separators for a graph constructed by connecting each vertex, $v$, to $N^2(v)$, and (c) shows the corresponding result for $N^3(v)$.

If a pure quadrilateral mesh is provided as input to our method, the output is an identical quad mesh since each vertex will be a separator. Simply triangulating the quad mesh makes the algorithm behave as expected. However, we can also increase the connectivity in other ways than by triangulating. A simple procedure is to connect all vertices to the neighbors of their neighbors. More generally, we can define $N^k(v)$ as the set of vertices at a distance of at most $k$ edges from $v$. Figure 13 shows the skeletons produced from a triangulation (a) and from a graph where each vertex, $v$, has been connected to $N^2(v)$ (b) and $N^3(v)$ (c). Unsurprisingly, the separators become fewer and thicker as the connectivity increases.

3.3 Voxel Grids

Like a triangle mesh, a voxel grid can be construed as a graph. In this case, the positions of the graph vertices are simply the positions of the voxels in the 3D grid, but we remove vertices that correspond to background voxels and keep only those which correspond to foreground (or object) voxels. Each graph vertex should be connected to all the vertices which correspond to foreground voxels in its 26-connected neighborhood. If we think of a voxel as a small cube in a grid of cubes, 26-connectivity means that the voxel (and hence the graph vertex) is connected to all the voxels with which it shares a face, an edge, or a vertex.
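Constructing such a graph is mechanical; the sketch below builds the 26-connected graph from a binary foreground mask stored in x-fastest order (our own layout choice):

```cpp
// One vertex per foreground voxel, positioned at the voxel center, with
// edges to all foreground voxels in the 3x3x3 neighborhood.
#include <array>
#include <vector>

using Vec3  = std::array<double, 3>;
using Graph = std::vector<std::vector<int>>;

void voxel_graph(const std::vector<char>& fg, int nx, int ny, int nz,
                 Graph& g, std::vector<Vec3>& pos)
{
    auto at = [&](int x, int y, int z) { return (z * ny + y) * nx + x; };
    std::vector<int> id(fg.size(), -1);          // voxel -> vertex index
    for (int z = 0; z < nz; ++z)
        for (int y = 0; y < ny; ++y)
            for (int x = 0; x < nx; ++x)
                if (fg[at(x, y, z)]) {
                    id[at(x, y, z)] = (int)pos.size();
                    pos.push_back({double(x), double(y), double(z)});
                    g.emplace_back();
                }
    for (int z = 0; z < nz; ++z)
        for (int y = 0; y < ny; ++y)
            for (int x = 0; x < nx; ++x) {
                int u = id[at(x, y, z)];
                if (u < 0) continue;
                for (int dz = -1; dz <= 1; ++dz)
                    for (int dy = -1; dy <= 1; ++dy)
                        for (int dx = -1; dx <= 1; ++dx) {
                            if (!dx && !dy && !dz) continue;
                            int X = x + dx, Y = y + dy, Z = z + dz;
                            if (X < 0 || Y < 0 || Z < 0 ||
                                X >= nx || Y >= ny || Z >= nz) continue;
                            int v = id[at(X, Y, Z)];
                            if (v >= 0) g[u].push_back(v);  // 26-neighbor edge
                        }
            }
}
```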

Figure 14: Volumetric skeletons generated from the Wood Sculpture. The original mesh (a), a graph containing 760 vertices created from a voxel grid with local separators shown in color (b). The skeleton produced from this graph (c), the skeleton produced from a volumetric graph consisting of 1800 vertices (d), 3540 vertices (e), 6166 vertices (f), 9830 vertices (g), 14723 vertices (h), and 20966 vertices (i).

We conducted two experiments using voxel grids. In the first experiment, we sampled the Wood Sculpture model on a voxel grid in order to produce a graph. We did this for seven voxel grids with longest side lengths ranging from 30 through 90. The results are shown in Figure 14. There is a clear variation in the skeletons, and comparing with Figure 8 it seems that slightly more details are captured in the mesh skeleton. This is to be expected since the voxel grids are derived from the mesh. Note how the local separators shown in Figure 14(b) are laminar rather than cycles.

In another experiment, we ran the code on X-ray CT data of liquid foam from the TomoBank repository [DGC+18]. We create graph nodes for voxels whose intensity is above 0.37, which corresponds to 11.3% of the voxels becoming graph nodes. The output is shown in Figure 15 (top two rows). We can easily compare our approach to image-based approaches to skeletonization. The scikit-image (https://scikit-image.org) package for Python contains a function which, given a 3D binary image, returns a new 3D binary image containing a one-voxel-thick skeleton. This function works by iterative thinning according to the method of [LKC94], and it is extremely fast, producing a skeleton in roughly 55 milliseconds. However, the output is simply a binary image, whereas our method estimates more precise vertex positions as well as their connectivity. A comparison is shown in Figure 15 (bottom row).
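For reference, the thinning baseline can be reproduced in a few lines; the file name and variable names below are placeholders, and the threshold is the one used above:

    import numpy as np
    from skimage.morphology import skeletonize

    ct = np.load("foam.npy")    # placeholder path to the reconstructed volume
    binary = ct > 0.37          # same threshold as used for our graph nodes
    thin = skeletonize(binary)  # for a 3D image, scikit-image applies the
                                # thinning method of Lee et al. [LKC94];
                                # older versions expose this as skeletonize_3d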

Figure 15: Skeleton from tomographic reconstruction of foam. Clockwise from the top left image, the first two rows show a volume rendering, the local separators, a detail of the local separators, and the reconstructed skeleton. The bottom row shows a comparison between morphological thinning (left) and our method (right).

3.4 Point Clouds

In some respects, point clouds are a more challenging type of data than meshes or voxel grids. Since our algorithm operates on graphs, we need to convert an input point cloud to a graph before proceeding. Figure 1 shows a simple example of a 2D point cloud where a graph has been constructed by connecting each point to all neighbors within a given radius, after which our method has been applied to the graph.

Of course, Figure 1 is a toy example. For real point clouds, we use the following pipeline:

  1. Construct a graph by connecting each point to at most its k nearest neighbors within a given radius.

  2. Connect each vertex, v, to the vertices in N^k(v) for some k (again only within a certain radius).

  3. Simplify the graph by edge contraction.

We will briefly justify this pipeline. In order to separate features of the represented object (say, two branches of a tree), we often need to use a radius in Step 1 that is too small to ensure that we obtain all the edges needed to fulfill the reconstruction condition. However, as long as these edges can be reached by going from neighbor to neighbor, we can obtain them via the second step. Thus, the gap that is maintained between features can be smaller than the greatest distance between vertices that should be connected. Often the resulting graph contains more vertices than needed to capture the skeletal features we seek, hence the third step. Note that while this pipeline is not trivial and depends on several parameters that must be tuned, it is simpler than any method that we are aware of for reconstructing a manifold triangle mesh for an object of unknown genus. A sketch of Steps 1 and 2 is given below.
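The following sketch of Steps 1 and 2 uses a SciPy k-d tree; the parameter values are placeholders rather than the settings used in our experiments:

    import numpy as np
    from scipy.spatial import cKDTree

    def point_cloud_graph(P, k=8, radius=0.1, hops=2):
        """Steps 1-2 of the pipeline: a capped k-NN graph within `radius`,
        densified by connecting vertices to neighbors of neighbors (also
        only within `radius`). P: (n, 3) array of points."""
        tree = cKDTree(P)
        n = len(P)
        adj = [set() for _ in range(n)]
        # Step 1: at most k nearest neighbors within the radius.
        _, nbr = tree.query(P, k=k + 1, distance_upper_bound=radius)
        for i in range(n):
            for j in nbr[i][1:]:      # entry 0 is the point itself
                if j < n:             # j == n marks "no neighbor found"
                    adj[i].add(int(j))
                    adj[int(j)].add(i)
        # Step 2: add edges to N^2(v), N^3(v), ... within the radius.
        for _ in range(hops - 1):
            extra = [set() for _ in range(n)]
            for i in range(n):
                for j in adj[i]:
                    for l in adj[j]:
                        if l != i and np.linalg.norm(P[i] - P[l]) <= radius:
                            extra[i].add(l)
            for i in range(n):
                adj[i] |= extra[i]
        return adj

Step 3 (simplification by edge contraction) is omitted from the sketch since it depends on the desired output resolution.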

A LiDAR scan of a botanical tree consisting of 832943 points was provided by collaborators. Applying the pipeline above, we obtained a graph consisting of 48661 vertices. Once the LS method had been applied to this graph, the skeleton was garbed using convolution surfaces [BS91], producing the result shown in Figure 2.

Using the botanical tree and three of the example point clouds provided by the authors as test data, we compared our LS method to the L1-medial skeleton method (L1M) by Huang et al. [HWC+13]. The comparison was carried out using the executable provided by the authors. Table 2 summarizes the comparison. Note that we do not compare timings, since the tests were carried out on different computers and used different numbers of vertices (for LS) and samples (for L1M). The results are shown in Figure 16. It is clear that the two methods result in skeletons which are qualitatively very different. L1M is not constrained by point connectivity, which means that it can sometimes capture a feature defined by several disjoint point clusters with a single chain of skeletal edges. This is evident in Figure 16 (d): due to several gaps in the point cloud, the back and the belly of the Dinosaur are represented by different edge chains with our method but captured by a single chain of edges with L1M. However, L1M does have a propensity for creating gaps in the skeleton, and this is very apparent in the case of the botanical tree, where the method seems to invent some branches while missing others. Admittedly, L1M is governed by numerous parameters. We tried eight different settings on the tree model but were unable to obtain a better result.

Model      Points    LS vertices    L1M samples    L1M iter
Tree       48661     48661          48661          500
Dinosaur   25713     2329           25713          255
GLady      44230     859            5559           165
Yoga3      21903     2307           21903          199
Table 2: Comparison of the LS and L1M methods for skeletonization. The table lists the number of input points, the number of graph vertices used by LS, and the number of samples and iterations used by L1M.
Figure 16: (a) shows a comparison using the botanical tree point cloud. On the left, the L1M algorithm by Huang et al. [HWC+13] was employed; on the right, our local separator algorithm was used to skeletonize the graph produced from the points. (b,c,d) show comparisons between the method of Huang et al. and our method for three further point clouds. In each case, the left image was computed using L1M, the middle image shows the input graph and the color-coded separators found using the LS method, and the right image shows the final LS skeleton.


3.5 Experimental Running Time Analysis

To experimentally evaluate the performance of our algorithm, we ran it on subsampled versions of Wood Sculpture (see Figure 8). We ran two experiments: in the first, we thinned out the mesh to obtain shapes with fewer vertices (all vertices, then half, a quarter, and so on). In the second, we transformed the mesh into voxel grids with a varying number of voxels.

In all experiments, we found that the time spent computing the separators dominates the overall running time by far (more than 95% of the time was spent on finding separators as opposed to packing and extraction), and that the proportion of the total time spent on computing separators tends to 1 as the number of vertices grows.

Although the running time is really a function of two parameters, the number of vertices n and the average size of the separators found by Algorithm 1, we ran the experiment only on this one shape (of fixed lankiness) and analyse the running time simply as a function of n. We expected the running time to stay within the worst-case bounds for the mesh and for the voxel grid, and hopefully well below them because of the sampling. As shown in Figure 17, we did indeed observe an asymptotically better running time than the worst case, and we did observe that voxel grids were asymptotically slower than meshes.

Figure 17: The running time for submeshes formed from Wood Sculpture, and their power regression curves.

A natural question, since the running time depends on the number of vertices sampled, is: how many vertices are sampled? Here, we expected asymptotically fewer points to be sampled from the voxel grid than from the mesh, and this indeed appears to be the case. Using power regression, we estimate the total number of sampled vertices as a power of n for an n-vertex mesh, with a smaller exponent for an n-vertex voxel grid. Note, however, that these exponents are highly dependent on the lankiness of the specific shape in question.
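The power regression itself amounts to a linear least-squares fit in log-log space. A minimal sketch with made-up measurements (the numbers below are illustrative, not our actual data):

    import numpy as np

    def power_fit(n, t):
        """Fit t ~ c * n**a by linear least squares on (log n, log t).
        Returns (c, a)."""
        a, log_c = np.polyfit(np.log(n), np.log(t), 1)
        return np.exp(log_c), a

    # Hypothetical measurements: vertex counts and running times in seconds.
    n = np.array([1e3, 2e3, 4e3, 8e3, 16e3])
    t = np.array([0.4, 1.1, 3.0, 8.2, 22.0])
    c, a = power_fit(n, t)
    print(f"t ~ {c:.3g} * n^{a:.2f}")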

4 Conclusions and Future Work

We have presented our results for a skeletonization algorithm based on the notion of local separators. Our method accepts as input a graph whose vertices map to spatial positions. While certain assumptions on connectivity must be met for a meaningful output to be produced, our method works on a wide range of inputs, and the resulting skeletons tend to resolve all important features.

A local separator can be seen as a skeletal atom, and a distinctive point of our method is that we find these atoms first and then assemble the skeleton. This is somewhat different from most methods based, e.g., on clustering [JXC+13], contraction [HWC+13], or smoothing [TAO+12, ATC+08], where these two steps are coupled, meaning that the atoms are known only once the skeleton has been found (at least locally). In our approach, we require the atoms to be separators and find a superset of the needed atoms, i.e., minimal local separators. We speculate that this is what allows the LS approach to resolve finer details than MCF [TAO+12] in the shapes we tested: if the skeletal atoms are the product of a process that terminates only when the skeletal structure has been computed, then atoms corresponding to small structures may have been incidentally removed by the time that point is reached. Of course, this advantage is only reaped because the LS algorithm is able to resolve small details in the first place, which is thanks in large part to the notion of separator minimality.

Unlike a number of approaches to curve skeletonization, notably [DS06], our method does not take the medial surface into account. However, recent methods compute curve skeletons that are attracted to the medial surface; this would certainly be possible with our method as well and might be worth investigating. Likewise, smoothing is not inherent in our method. Arguably, this is an advantage, since smoothing can also make the skeleton less precise, but both attraction to the medial surface and optional smoothing could be added, leading to a method that would flexibly allow the user to obtain skeletons with the desired degrees of smoothness and proximity to the medial surface, in the vein of [TAO+12], at least for shapes where the medial surface is well defined.

In this work, we addressed performance through parallelization and sampling, but the process of finding and packing local separators still becomes very slow for large separators. The use of a dynamic spatial data structure [AMN+98] when searching the front, together with a structure for dynamic connectivity of the front [HdT01], would reduce the search time in Algorithm 1 from linear to polylogarithmic in the number of vertices.

The performance of the sampling scheme could also be improved. In particular, we might not have to start from a vertex when computing every single separator. Given an existing separator, the connected components of its neighbors are, in most cases, also separators. Thus, it is quite possible to find separators by a search that starts from a single separator and propagates outward in this way. However, our preliminary experiments indicate that sampling is still required and that small features are not easier to find with this approach. Thus, more work is needed to find a sampling scheme that balances coverage with exploiting the information in the separators already found.

An important feature of our algorithm is that it works for any input that is a graph or that can easily be transformed into one. As such, it may be used for skeletonization of shapes not only in 2D and 3D, but also of shapes in higher dimensions or of non-Euclidean graphs. An open question for future work is to find such applications and to test our algorithm on them.

Acknowledgements

We are indebted to Rasmus Reinhold Paulsen, Patrick Møller Jensen, Tim Felle Olsen, and Niels Jeppesen for providing several types of data and testing the method on that data. We are grateful to Ebba Dellwik for providing the LiDAR scan of a botanical tree and to Ida Bukh Villesen for her work on a method for constructing a graph from this point set.

References

  • [AMN+98] S. Arya, D. M. Mount, N. S. Netanyahu, R. Silverman, and A. Y. Wu (1998) An optimal algorithm for approximate nearest neighbor searching fixed dimensions. J. ACM 45 (6), pp. 891–923.
  • [ATC+08] O. K. Au, C. Tai, H. Chu, D. Cohen-Or, and T. Lee (2008) Skeleton extraction by mesh contraction. ACM Transactions on Graphics (TOG) 27 (3), pp. 1–10.
  • [BGS+08] S. Biasotti, D. Giorgi, M. Spagnuolo, and B. Falcidieno (2008) Reeb graphs for shape analysis and applications. Theoretical Computer Science 392 (1–3), pp. 5–22.
  • [BS91] J. Bloomenthal and K. Shoemake (1991) Convolution surfaces. ACM SIGGRAPH Computer Graphics 25 (4), pp. 251–256.
  • [BLU67] H. Blum (1967) A transformation for extracting new descriptors of shape. Models for the Perception of Speech and Visual Form 19 (5), pp. 362–380.
  • [BRO97] A. Broder (1997) On the resemblance and containment of documents. In Proceedings of the Compression and Complexity of Sequences 1997, SEQUENCES '97, p. 21.
  • [CHV79] V. Chvatal (1979) A greedy heuristic for the set-covering problem. Mathematics of Operations Research 4 (3), pp. 233–235.
  • [CSM07] N. D. Cornea, D. Silver, and P. Min (2007) Curve-skeleton properties, applications, and algorithms. IEEE Transactions on Visualization & Computer Graphics (3), pp. 530–548.
  • [DGC+18] F. De Carlo, D. Gürsoy, D. J. Ching, K. J. Batenburg, W. Ludwig, L. Mancini, F. Marone, R. Mokso, D. M. Pelt, J. Sijbers, et al. (2018) TomoBank: a tomographic data repository for computational x-ray science. Measurement Science and Technology 29 (3), p. 034004.
  • [DS06] T. K. Dey and J. Sun (2006) Defining and computing curve-skeletons with medial geodesic function. In Symposium on Geometry Processing, Vol. 6, pp. 143–152.
  • [DFW13] T. K. Dey, F. Fan, and Y. Wang (2013) An efficient computation of handle and tunnel loops via Reeb graphs. ACM Trans. Graph. 32 (4), pp. 32:1–32:10.
  • [GZL+19] L. Gao, D. Zhang, N. Li, and L. Chen (2019) Force field driven skeleton extraction method for point cloud trees. Earth Science Informatics 12 (2), pp. 161–171.
  • [HSS06] E. Hazan, S. Safra, and O. Schwartz (2006) On the complexity of approximating k-set packing. Comput. Complex. 15 (1), pp. 20–39.
  • [HdT01] J. Holm, K. de Lichtenberg, and M. Thorup (2001) Poly-logarithmic deterministic fully-dynamic algorithms for connectivity, minimum spanning tree, 2-edge, and biconnectivity. J. ACM 48 (4), pp. 723–760.
  • [HWC+13] H. Huang, S. Wu, D. Cohen-Or, M. Gong, H. Zhang, G. Li, and B. Chen (2013) L1-medial skeleton of point cloud. ACM Trans. Graph. 32 (4), Article 65.
  • [HS89] C. A. J. Hurkens and A. Schrijver (1989) On the size of systems of sets every t of which have an SDR, with an application to the worst-case ratio of heuristics for packing problems. SIAM Journal on Discrete Mathematics 2 (1), pp. 68–72.
  • [JK34] V. Jarník and M. Kössler (1934) O minimálních grafech obsahujících n daných bodů [On minimal graphs containing n given points]. Časopis Pěst. Mat. 63, pp. 223–235.
  • [JXC+13] W. Jiang, K. Xu, Z. Cheng, R. R. Martin, and G. Dang (2013) Curve skeleton extraction by coupled graph contraction and surface clustering. Graphical Models 75 (3), pp. 137–148.
  • [KOR13] D. Kordalewski (2013) New greedy heuristics for set cover and set packing. arXiv preprint arXiv:1305.3584.
  • [KN01] B. Korte and J. Nešetřil (2001) Vojtěch Jarník's work in combinatorial optimization. Discrete Mathematics 235 (1), pp. 1–17.
  • [LKC94] T. Lee, R. L. Kashyap, and C. Chu (1994) Building skeleton models via 3-D medial surface axis thinning algorithms. CVGIP: Graphical Models and Image Processing 56 (6), pp. 462–478.
  • [LYO+10] Y. Livny, F. Yan, M. Olson, B. Chen, H. Zhang, and J. El-Sana (2010) Automatic reconstruction of tree skeletal structures from point clouds. ACM Transactions on Graphics (TOG) 29 (6), Article 151.
  • [PRE12] C. Preuksakarn (2012) Reconstructing plant architecture from 3D laser scanner data: modeling and simulation. Ph.D. thesis, Université Montpellier II - Sciences et Techniques du Languedoc.
  • [QLZ+18] E. Quigley, W. Lin, Y. Zhu, and R. Fedkiw (2018) Three dimensional reconstruction of botanical trees with simulatable geometry. CoRR abs/1812.08849.
  • [RAV+19] D. Rebain, B. Angles, J. Valentin, N. Vining, J. Peethambaran, S. Izadi, and A. Tagliasacchi (2019) LSMAT: least squares medial axis transform. In Computer Graphics Forum, Vol. 38, pp. 5–18.
  • [SBd16] P. K. Saha, G. Borgefors, and G. S. di Baja (2016) A survey on skeletonization algorithms and their applications. Pattern Recognition Letters 76, pp. 3–12.
  • [SBd17] P. K. Saha, G. Borgefors, and G. S. di Baja (2017) Skeletonization: theory, methods and applications. Academic Press.
  • [SS12] P. Sanders and C. Schulz (2012) High quality graph partitioning. Graph Partitioning and Graph Clustering 588 (1), pp. 1–17.
  • [SER83] J. Serra (1983) Image analysis and mathematical morphology. Academic Press, Inc.
  • [SP08] K. Siddiqi and S. Pizer (2008) Medial representations: mathematics, algorithms and applications. Vol. 37, Springer Science & Business Media.
  • [TAO+12] A. Tagliasacchi, I. Alhashim, M. Olson, and H. Zhang (2012) Mean curvature skeletons. In Computer Graphics Forum, Vol. 31, pp. 1735–1744.
  • [TDS+16] A. Tagliasacchi, T. Delame, M. Spagnuolo, N. Amenta, and A. Telea (2016) 3D skeletons: a state-of-the-art report. Computer Graphics Forum 35 (2), pp. 573–597.
  • [TZC09] A. Tagliasacchi, H. Zhang, and D. Cohen-Or (2009) Curve skeleton extraction from incomplete point cloud. ACM Trans. Graph. 28 (3), pp. 71:1–71:9.
  • [TVD08] J. Tierny, J. Vandeborre, and M. Daoudi (2008) Enhancing 3D mesh topological skeletons with discrete contour constrictions. The Visual Computer 24 (3), pp. 155–172.
  • [XZK+19] Z. Xu, Y. Zhou, E. Kalogerakis, and K. Singh (2019) Predicting animation skeletons for 3D articulated models via volumetric nets. In 2019 International Conference on 3D Vision (3DV), pp. 298–307.
  • [YSC+16] Y. Yan, K. Sykes, E. W. Chambers, D. Letscher, and T. Ju (2016) Erosion thickness on medial axes of 3D shapes. ACM Trans. Graph. 35 (4), pp. 38:1–38:12.
  • [ZYH+15] Y. Zhou, K. Yin, H. Huang, H. Zhang, M. Gong, and D. Cohen-Or (2015) Generalized cylinder decomposition. ACM Trans. Graph. 34 (6), Article 171.