
Epsilon-shapes: characterizing, detecting and thickening thin features in geometric models

by   Daniela Cabiddu, et al.

We focus on the analysis of planar shapes and solid objects having thin features and propose a new mathematical model to characterize them. Based on our model, which we call an epsilon-shape, we show how thin parts can be effectively and efficiently detected by an algorithm, and propose a novel approach to thicken these features while leaving all the other parts of the shape unchanged. When compared with state-of-the-art solutions, our proposal proves to be particularly flexible, efficient and stable, and does not require any unintuitive parameter to fine-tune the process. Furthermore, our method is able to detect thin features both in the object and in its complement, thus providing a useful tool to detect thin cavities and narrow channels. We discuss the importance of this kind of analysis in the design of robust structures and in the creation of geometry to be fabricated with modern additive manufacturing technology.



1 Introduction

Thickness information is important in a number of shape-related applications, including surface segmentation, shape retrieval and simulation. By aggregating points with a similar local thickness, Shapira et al. (2008) developed a pose-invariant shape partitioning algorithm, whereas thickness was used in histogram form as a shape descriptor for retrieval purposes in Schmitt et al. (2015). While considering shape thickness in general, a particular relevance is assumed by thin parts such as thin walls or wire-like features. These parts represent weak locations in physical structures, can be an obstacle to thermal dissipation, and can be subject to unwanted melting when traversed by intense electric currents. Furthermore, thin features are a serious issue in many manufacturing technologies. In plastic injection molding, for example, thin regions can restrict the flow of molten plastic Osswald et al. (2008), causing the mold to be only partially filled. In modern additive manufacturing technologies, overly thin features may lead to small layers of the surface being peeled, thin shape parts being fully removed, and the shape breaking up into several parts due to narrow connections Telea and Jalba (2011).

The intuitive concept of “shape thickness” has been formalized in several ways in the literature, and diverse algorithms exist for its actual evaluation Shapira et al. (2008) Kovacic et al. (2010) Miklos et al. (2010). However, existing algorithms require a trade-off between accuracy and efficiency that can be hard to establish for demanding applications such as industrial 3D printing.

In this paper we introduce the concept of "ε-shape", a mathematical model for objects having no features thinner than a threshold value ε. Based on this concept, we describe an algorithm to exactly and efficiently detect thin features. Differently from existing methods, our approach focuses on thin features only, meaning that parts of the geometry which are thick enough are not subject to any complex analysis: this is the key to achieving efficiency without sacrificing precision. Furthermore, our formulation allows us to detect thin portions in both the model and its complementary part. Note that this characteristic is particularly important when analyzing 3D mechanisms Hergel and Lefebvre (2015): if parts which are supposed to move independently are separated by too small a gap, they risk being glued together when printed.

To demonstrate the usefulness of our analysis in practice, we describe a novel algorithm to thicken those parts of the shape which are thinner than a given threshold value (e.g. the layer thickness for 3D printing applications) while leaving all the other parts unaltered.

2 Related Work

The medial axis of a shape is the locus of points having more than one closest point on the shape’s boundary. The medial axis may be enriched by considering each of its points endowed with its minimum distance from the shape’s boundary Choi et al. (1997), leading to the so-called Medial Axis Transform (MAT). From a purely theoretical point of view, the MAT should be the reference tool to compute thickness information.
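In symbols (notation ours: S the shape, ∂S its boundary, d(·,·) the Euclidean distance), the two objects can be written as:

```latex
\mathrm{MA}(S) = \bigl\{\, x \in S \;:\; \#\{\, y \in \partial S : \lVert x - y\rVert = d(x, \partial S) \,\} \ge 2 \,\bigr\},
\qquad
\mathrm{MAT}(S) = \bigl\{\, \bigl(x,\, d(x, \partial S)\bigr) \;:\; x \in \mathrm{MA}(S) \,\bigr\}.
```

The local thickness at a boundary point can then be read off from the radius stored at the closest medial point (one common convention; exact definitions vary in the literature).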

Unfortunately, as observed in Shapira et al. (2008), computing the medial axis and the MAT of a surface mesh can be an expensive process, and the medial axis itself is hard to manipulate Amenta et al. (2001). Furthermore, algorithms based on the MAT are too sensitive and tend to confuse surface noise with small features unless approximations are employed Miklos et al. (2010); Li et al. (2015). This motivated the emergence of the numerous alternative methods described in the remainder of this section.

2.1 Voxel-based methods

The analysis of medical 3D images has been an early driving application for thickness analysis. For Hildebrand and Rüegsegger Hildebrand and Rüegsegger (1997), the thickness related to a surface point (i.e. a skin voxel) is defined as the radius of the largest sphere centered inside the object and touching that point. This definition was turned into an algorithm in Dougherty and Kunzelmann (2007) where, after having computed the discrete medial axis, a distance transform from it is employed to associate a thickness value to all the skin voxels. In a more efficient approach based on PDEs Yezzi and Prince (2003), Yezzi and colleagues compute thickness as the minimum-length surface-to-surface path between pairs of surface points. Telea and Jalba Telea and Jalba (2011) observe that the extension of Yezzi's method to higher-genus models is not evident, and propose an alternative algorithm based on the top-hat transform from multi-scale morphology Maragos (1996).
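The inscribed-sphere definition admits a simple, if naive, implementation: compute a distance transform, then stamp each maximal inscribed ball's diameter onto every voxel it covers (recording the diameter as the thickness value, one common convention). The 2D sketch below (pure NumPy, brute-force distance transform, function names ours) is meant for intuition rather than efficiency:

```python
import numpy as np

def edt(mask):
    """Brute-force Euclidean distance transform: for every foreground
    pixel, the distance to the nearest background pixel (fine for tiny
    masks; real pipelines would use a linear-time transform)."""
    fg = np.argwhere(mask)
    bg = np.argwhere(~mask)
    d = np.sqrt(((fg[:, None, :] - bg[None, :, :]) ** 2).sum(-1)).min(axis=1)
    out = np.zeros(mask.shape)
    out[tuple(fg.T)] = d
    return out

def local_thickness(mask):
    """Inscribed-sphere local thickness, naive version: the thickness
    at a pixel is the diameter of the largest inscribed ball that
    contains it."""
    dist = edt(mask)
    thick = np.zeros(mask.shape)
    for y, x in np.argwhere(mask):
        r = dist[y, x]
        # stamp this ball's diameter onto every foreground pixel it covers
        y0, y1 = int(max(0, y - r)), int(min(mask.shape[0], y + r + 1))
        x0, x1 = int(max(0, x - r)), int(min(mask.shape[1], x + r + 1))
        yy, xx = np.mgrid[y0:y1, x0:x1]
        inside = ((yy - y) ** 2 + (xx - x) ** 2 <= r * r) & mask[y0:y1, x0:x1]
        sub = thick[y0:y1, x0:x1]
        np.maximum(sub, np.where(inside, 2.0 * r, 0.0), out=sub)
    return thick

# A 5-pixel-wide horizontal strip: away from its ends, the thickness
# map plateaus at 2 * 3 = 6 (the largest inscribed disk has radius 3
# in pixel units).
mask = np.zeros((20, 20), dtype=bool)
mask[5:10, 2:18] = True
```

Note how the stamping step is what distinguishes local thickness from a plain distance transform: a pixel near the boundary inherits the diameter of the large ball that covers it, not its own small distance value.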

All these methods are suitable when the input comes in voxel form, but for objects represented in vector form (e.g. triangle meshes) an initial "voxelization" step is required, which necessarily introduces a distortion. In principle, for 3D printing applications it is sufficient to keep this distortion below the printer's resolution Telea and Jalba (2011). Even if this solution may work for low-resolution devices such as low-cost FDM-based printers, it is too demanding for industrial printing, where the layer thickness can be as small as 14 microns Str (????): at that resolution, a 10 cm cube would require more than 364 billion voxels.
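A back-of-the-envelope check of this figure, assuming a 10 cm cube voxelized at the 14-micron layer thickness (variable names ours):

```python
layer_um = 14                          # industrial layer thickness, in microns
side_um = 10 * 10_000                  # side of a 10 cm cube, in microns
voxels_per_side = side_um // layer_um  # 7142 voxels along each axis
total_voxels = voxels_per_side ** 3
print(f"{total_voxels:.2e}")           # about 3.64e+11, i.e. over 364 billion
```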

2.2 Mesh-based methods

In Lambourne et al. (2005) the concept of thickness at a point is defined based on spheres centered within the object and touching that point tangentially. Differently from the medial axis, these spheres may not be completely contained in the object. Though this method can be applied both to NURBS models and to piecewise-linear meshes, it has been shown to work well on relatively simple shapes only.

The Shape Diameter Function (SDF) was introduced in Shapira et al. (2008) as a tool to perform mesh segmentation, and today it is one of the most widespread methods to evaluate the thickness at a mesh point. The idea is to take a point on the surface and (1) compute the surface normal at that point, (2) cast a ray in the opposite direction, and (3) measure the distance of the first intersection of this ray with the mesh. Since this basic approach is too sensitive to small shape features and noise, Shapira and colleagues propose to use a number of casting directions within a cone around the estimated surface normal, and to keep a statistically relevant average of all the distances. Unfortunately this workaround may lead to completely unexpected results exactly in those cases that we consider to be critical. Imagine a tubular shape with a very narrow bottleneck on one side (see Figure 1). In this case only a minority of the rays hits the bottleneck tip, and thus the resulting average would be far from the expectation. This is even amplified in practice because, in an attempt to make the evaluation more accurate, the SDF method filters out those diameters which are too far from the median value, as they are considered to be outliers. Note that this is a serious issue in all those applications where potential structural weaknesses must be detected (e.g. industrial 3D printing). Though some improvements are possible to make the SDF calculation faster Kovacic et al. (2010) and less sensitive to noise Rolland-Nevière et al. (2013), this intrinsic limitation remains.

Figure 1: SDF may return a result too far from the expectation. (a) Example of a shape with a narrow bottleneck in the middle. (b) Only a minority of cast rays touches the bottleneck tip.
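The failure mode just described is easy to reproduce numerically. The sketch below imitates only the aggregation step of SDF-style estimators (sampled ray lengths in, robust average out); the one-standard-deviation filtering rule is our assumption, as implementations differ:

```python
import numpy as np

def sdf_aggregate(distances):
    """Robust averaging step used by SDF-style estimators (a sketch):
    keep only ray lengths within one standard deviation of the median,
    then average the survivors."""
    d = np.asarray(distances, dtype=float)
    med, std = np.median(d), d.std()
    kept = d[np.abs(d - med) <= std]
    return kept.mean()

# 30 rays hit the wide part of the tube (diameter ~10) while only 2
# hit the narrow bottleneck tip (diameter ~1): the outlier filter
# discards exactly the rays that reveal the thin feature.
rays = [10.0] * 30 + [1.0] * 2
print(sdf_aggregate(rays))  # -> 10.0, not ~1
```

The two short rays are more than one standard deviation below the median, so they are dropped as "outliers" and the bottleneck leaves no trace in the estimate.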

2.3 Thickening

Automatic detection of thin parts is an important tool for a designer who is producing a shape model. If too-thin parts are actually detected, the designer may take countermeasures such as thickening those parts. However, in contexts where the user is not expert enough (e.g. home users of consumer 3D printers), automatic thickening algorithms would help a lot. This assumption motivated the work by Wang and Chen Wang and Chen (2013) in the context of solid fabrication: models created by inexperienced designers may represent thin features by zero-thickness triangle strips, and Wang and Chen (2013) presents an algorithm to thicken these features so that they become solid and printable while keeping the outer surface geometry unaltered.

A generalization of morphological operators is used in Calderon and Boubekeur (2014), where thickening can be obtained by applying an overall dilation to the input.

The aforementioned methods do not adjust the amount of thickening depending on the local needs (i.e. the entire model is uniformly thickened). Conversely, based on an approximate medial axis, Stava et al. (2012) cleverly analyzes the geometry to predict where the printed prototype undergoes excessive stress due to manipulation and, besides adaptively thickening the weak parts, the method also adds small supporting structures to improve the overall object robustness. Note that while we focus on geometry only, in Stava et al. (2012) the objective is to meet physical requirements, and some of the thin features may remain if they do not represent a weak part according to mechanical criteria. A comprehensive worst-case simulation is performed in Zhou et al. (2013), but in this case the proposed algorithm is limited to the analysis and does not include a thickening phase.

2.4 Summary of contributions

The method presented in this article strives to overcome the limitations of the existing approaches discussed so far. In particular, we provide an original, well-defined mathematical model to represent thickness information in any dimension and, based on this model, define an efficient algorithm to detect thin features in 2D and 3D objects represented in piecewise-linear form. We show how the method can be used to detect thin features in both the object and its complementary part, and demonstrate how to speed up the calculation by limiting the complex analysis to thin areas, which are the subject of our investigation. Finally, we show how thickness information can be exploited to adaptively thicken the object only where necessary, while leaving all the other parts of the shape unchanged.

3 Epsilon-shapes

Let S be a compact n-manifold with boundary, embedded in ℝⁿ. Let p be a point lying on the boundary ∂S of S and let ε be a real non-negative number (ε ≥ 0). Let B(p, ε) be a closed n-ball centered at p and having radius equal to ε.

We define two operators that we call ε-sum and ε-difference.

An ε-sum is the result of the union of S and B(p, ε),

S ∪ B(p, ε),

while an ε-difference is the result of the difference between S and the interior of B(p, ε),

S \ int(B(p, ε)).

ε-sums and ε-differences are grouped under a common definition of ε-modifications. An ε-modification of S is topologically neutral if the interior of its result is homeomorphic with the interior of S. It is worth noticing that the boundary of a topologically neutral ε-modification may not be an (n−1)-manifold.

A shape S is an ε-shape if all its ε′-modifications are neutral, for each ε′ ≤ ε. If S is an ε-shape, then S is also an ε′-shape for each ε′ ≤ ε. Let S be an ε-shape. S is a maximal ε-shape if no ε′ > ε exists so that S is also an ε′-shape.
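With S denoting the shape and B(p, ε′) the closed ball at a boundary point p (our notation), the definition can be condensed as:

```latex
S \text{ is an } \varepsilon\text{-shape}
\;\Longleftrightarrow\;
\forall\, p \in \partial S,\;
\forall\, \varepsilon' \le \varepsilon :\;
\operatorname{int}\bigl(S \cup B(p,\varepsilon')\bigr) \cong \operatorname{int}(S)
\;\text{ and }\;
\operatorname{int}\bigl(S \setminus \operatorname{int} B(p,\varepsilon')\bigr) \cong \operatorname{int}(S),
```

where ≅ denotes homeomorphism.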

Although any compact n-manifold is an ε-shape for some ε ≥ 0, it is worth observing that there are some such manifolds for which this is true only for ε = 0. An example in 2D is represented by polygons with acute angles, where any arbitrarily small disk induces a non-neutral ε-difference if its center is sufficiently close to the acute angle (Figure 2). However, even in these cases, we can analyze the boundary in a point-wise manner and construct what we call an ε-map.

Figure 2: Acute angles. Any arbitrarily small disk induces a non-neutral ε-difference if its center is sufficiently close to an acute angle.

We say that an ε-sum at p is strongly neutral if all the ε′-sums at p with ε′ ≤ ε are topologically neutral. An analogous concept is defined for ε-differences. A positive ε-map is a function ε⁺ that maps each point p of the boundary of S to the maximum value of ε for which the ε-sum at p is strongly neutral. A negative ε-map ε⁻ is defined analogously while replacing ε-sums with ε-differences. An ε-map is defined as ε(p) = min(ε⁺(p), ε⁻(p)). The minimum of ε is the value for which S is a maximal ε-shape. We extend this concept by saying that, if ε₀ is the minimum of ε⁺, then S is a positive ε₀-shape. If ε₀ is the minimum of ε⁻, then S is a negative ε₀-shape.

A positive ε-map represents our formalization of the intuitive concept of shape thickness, whereas a negative ε-map describes the thickness of the shape's complement. The following Sections 4 and 5 describe an algorithm to compute these maps in 2D and 3D respectively.
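In symbols (again with our notation for the shape S and a boundary point p), the three maps can be summarized as:

```latex
\varepsilon^{+}(p) = \max\{\varepsilon \ge 0 : \text{the } \varepsilon\text{-sum at } p \text{ is strongly neutral}\},
\qquad
\varepsilon^{-}(p) = \max\{\varepsilon \ge 0 : \text{the } \varepsilon\text{-difference at } p \text{ is strongly neutral}\},
\qquad
\varepsilon(p) = \min\bigl(\varepsilon^{+}(p),\, \varepsilon^{-}(p)\bigr).
```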

4 Planar shape analysis

In our scenario the input is a single polygon P, possibly non-simply connected. Hence the boundary of P, ∂P, is a 1-manifold made of a number of edges connected to each other at vertices. In a straightforward approach, the value of ε may be associated to each vertex of ∂P. However, we observe that a so-defined ε-map is not sufficient to represent all the thin features, since local minima of ε may correspond to points lying in the middle of some edges (see Figure 3). To enable a comprehensive and conservative representation of all the thin features, the aforementioned ε-map must be encoded while considering internal edge points as well. If a local minimum happens to be at a point p in the middle of an edge, we split that edge at p so that ε(p) is represented.

Our algorithm to compute ε is based on a region growing approach. Intuitively, we imagine an infinitely small disk centered at a point p on ∂P, and grow its radius in a continuous manner. Initially, the portion of ∂P contained in the disk is made of a single open line, and we keep growing the radius as long as the topology of this restricted ∂P does not change. It might happen, for example, that the line splits into separate components, or that it becomes closed (e.g. when the disk becomes large enough to contain the whole ∂P). The maximum value reached by the radius is the value of ε that we eventually associate to p. To turn this intuitive idea into a working algorithm, we first create a constrained triangulation of P's convex hull, where all the edges of ∂P are also edges of the triangulation. Then, we discretize the growing process by calculating relevant "events" that may occur either when the growing disk hits an edge of the triangulation, or when it hits one of its vertices. This approach is close in spirit to Chen and Han's algorithm Chen and Han (1990) to calculate geodesic distances on polyhedra, where the topology of the evolving front is guaranteed not to change between two subsequent events.

To simplify the exposition, we first describe how to compute ε at a single vertex (Section 4.1), and then show how to extend the method to find possible local minima of ε along an edge (Section 4.2). Finally, we describe an optimized procedure to compute the value of ε on the whole polygon while possibly splitting its edges at local minima (Section 4.3).

Figure 3: Example of a local minimum in the middle of an edge.

4.1 Thickness for a Single Vertex

(a) Convex hull and triangulation.
(b) R(x) at the first iteration.
(c) The disk at the first iteration.
(d) R(x) and the disk at the second iteration.
(e) R(x) and the disk at the third (and last) iteration.
Figure 4: Computing ε for a single vertex x. (a) Input polygon; both the convex hull and the triangulation are shown. (b) At the first iteration, R(x) includes all the triangles incident at x. (c) From the boundary of R(x), the point q closest to x is selected. The subset of polygon elements within the disk is a single connected component (i.e. the open polyline λ). (d) R(x) is updated by adding q's top-most incident triangle, and the new closest point q is computed. Again, the subpart of the polygon within the disk is a single open polyline. (e) R(x) grows, and the closest point q is computed. At this iteration, the disk includes two disconnected components (i.e. the open polyline λ and the point q). Thus, a topology change occurred, ε(x) is computed as the distance from x to q, and the algorithm terminates.

In the remainder, T denotes the constrained triangulation of P, a mesh edge is an edge of T, whereas a polygon edge is an edge of ∂P. So, a polygon edge is also a mesh edge but the opposite may not be true. Analogously, we refer to mesh vertices and polygon vertices, which is useful because the triangulation may (or may not) have additional vertices that do not belong to the original ∂P. A generic polygon element indicates either a polygon edge or a polygon vertex.

An example of this algorithm is shown in Figure 4. Let x be a vertex of ∂P. Let R(x) be our growing region, which is initially empty, and let ∂R(x) be the boundary of this region. Also, let λ be the portion of ∂P made of the two polygon edges incident at x. In the first iteration, R(x) includes all the triangles incident at x (Figure 4(b)). Iteratively, the algorithm computes the point q on ∂R(x) which is closest to x. At any iteration there might be multiple points at the same distance from x. In this case the algorithm randomly selects one of them, and all the others are detected in the subsequent iterations. Depending on the nature of the selected point q, the algorithm may either grow the region or terminate by associating d(x, q) to x, where d(x, q) is the Euclidean distance between x and q.

Specifically, the closest point q may be a vertex of ∂R(x) (Figures 4(c) and 4(e)), or it may be in the middle of one of its edges (Figure 4(d)). Notice that q is not necessarily a polygon element. Imagine building a disk centered at x and whose radius equals the distance from x to q. The algorithm terminates if q is what we call the antipodean of x, that is, q is on ∂P and the topology of ∂P restricted to the disk is no longer a single open line due to the inclusion of q. For this to happen, one of the following conditions must hold:

  • q is a point in the middle of a polygon edge;

  • q is a polygon vertex and its two incident polygon edges are either both internal or both external wrt the disk. Herewith, one such edge is external if q is its closest point to x, whereas it is internal in the other cases.

If none of these conditions holds, the algorithm grows R(x) by adding all the triangles whose boundary contains q. Thus, if q is a vertex, R(x) grows over all the triangles which are incident at q but are not in R(x) yet. If q is in the middle of an edge e, R(x) includes the triangle incident at e which is not in R(x) yet.
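For intuition, the quantity this search converges to can be conservatively approximated by brute force, without any triangulation: the distance from x to the closest boundary point not lying on the two edges incident at x. The sketch below (function names ours; it omits the internal/external antipodean test, so it is a stand-in for intuition, not the paper's algorithm) makes the idea concrete:

```python
import math

def point_segment_dist(p, a, b):
    """Euclidean distance from point p to segment ab."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def vertex_thickness_bruteforce(poly, i):
    """Distance from vertex i of the polygon to the closest point of
    the boundary lying on a non-incident edge -- a quadratic-time
    stand-in for the triangulation-driven region-growing search."""
    n, p = len(poly), poly[i]
    best = math.inf
    for j in range(n):
        if j in (i, (i - 1) % n):   # skip the two edges incident at vertex i
            continue
        best = min(best, point_segment_dist(p, poly[j], poly[(j + 1) % n]))
    return best

# A 10 x 1 rectangle: the thickness at a corner is governed by the
# opposite long side, at distance 1.
rect = [(0.0, 0.0), (10.0, 0.0), (10.0, 1.0), (0.0, 1.0)]
print(vertex_thickness_bruteforce(rect, 0))  # -> 1.0
```

The triangulation in the paper's algorithm serves precisely to avoid this quadratic scan: growth events restrict the candidate antipodean points to a small, expanding neighborhood.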

4.2 Minimum Thickness for a Single Edge

The region growing approach described in Section 4.1 can be extended to compute the minimum ε along a single edge e. In this case, the minimum thickness may be either at one of the two endpoints or in between. Intuitively, an arbitrarily small disk is centered on e and is grown iteratively by exploiting the triangulation to discretize the process. The center of the disk is not fixed, but is moved along e at each iteration. At the last iteration, the disk radius represents the minimum value of ε along e, while the position of the disk center shows where the local minimum lies.

Figure 5: Two successive moments of the disk growth process while computing the minimum ε along an edge (red colored). (a) The smallest disk centered on the edge and tangent to a vertex. (b) The smallest disk centered on the edge and tangent to another edge at one of its endpoints.
(a) Convex hull and triangulation.
(b) R(e) at the first iteration.
(c) The disk at the first iteration.
(d) R(e) and the disk at the second iteration.
(e) R(e) and the disk at the third iteration.
Figure 6: Computing the minimum ε along a single edge e. (a) The convex hull and the triangulation. (b) The initial R(e). (c) Point q is not the antipodean of p; R(e) grows by including all the triangles incident at q. (d) Second iteration. (e) Third iteration.

Before starting the region growing, we perform an acute angle test: let v be an endpoint of e, and let e′ be the polygon edge that shares v with e. If the angle formed by e and e′ at v is acute, we assign a zero value of ε to v. Similarly, we measure the angle at the other endpoint w and set ε(w) to zero if such an angle is acute. If either of the endpoints has been set to zero, the algorithm terminates. Indeed, in this case the minimum ε for the edge has already been found.

Otherwise, the algorithm initializes a region R(e) with all the triangles that share at least a vertex with e. Let λ be the union of the two polygon edges different from e that are incident at one of the endpoints of e. At each iteration, the point q on ∂R(e) which is closest to e is computed, and the corresponding point p on e which is closest to q is determined. If q is the "antipodean" of p the algorithm terminates, otherwise it proceeds with the region growing as described in Section 4.1. Notice that in this description we have implicitly extended the definition of antipodean of a vertex to any point on the boundary of R(e): now, p is not necessarily a vertex, and this fact has a subtle though major impact on our process. Indeed, the algorithm should terminate when the disk becomes tangent to a polygon edge in the middle, or in any other case when ∂P within the disk is no longer a single open line. But unfortunately, this is not sufficient. Consider the disk in Figure 5(b): such a disk is tangent to the polygon edge in one of its endpoints and no termination condition holds. However, any arbitrarily small displacement of the disk along e while possibly growing the disk itself would cause an event which is a stop condition of the algorithm (i.e. the disk would become tangent to that edge in the middle). To turn this observation into practice and guarantee to detect these events, we say that a point q on a polygon edge of ∂P is the antipodean of a point p in the middle of an edge if the segment pq is orthogonal to the polygon edge containing q. Hence, our region growing algorithm terminates either if q is a polygon edge endpoint that satisfies this orthogonality condition, or if it is a polygon vertex satisfying the conditions given in Section 4.1.

4.3 Global thin feature detection

A comprehensive ε-map can be built by splitting all the edges at local minima and by associating the value of ε to each vertex, based on the procedures described in Sections 4.2 and 4.1.

Specifically, if our procedure detects a local minimum in the middle of an edge, that edge is split, the triangulation is updated accordingly, and the algorithm is run again to compute the local minima along the two resulting sub-edges. This procedure is guaranteed to converge because ∂P is piecewise-linear and ε must necessarily have a finite number of local minima. Particular/degenerate cases such as, e.g., parallel edges are avoided through the Simulation of Simplicity Edelsbrunner and Mücke (1990).

Figure 7: Computing the minima along any polygon edge is not sufficient. In this example, p₁ and p₂ are the minima along the edges e₁ and e₂ respectively. Recursively, the generated sub-edges e₁′, e₁″, e₂′ and e₂″ are analyzed. p₁ is the minimum along the first two sub-edges; p₂ is the minimum along the other two sub-edges. The value of ε at vertex v is unknown, but it might be less than the given threshold. Thus, it is necessary to compute ε(v).

One might argue that computing ε at edges only is sufficient because any vertex is the end-point of some edge, and therefore its value of ε would be computed by the procedure in Section 4.2. Unfortunately this is not true for all the vertices, and a counter-example is shown in Figure 7. Thus, the algorithm described in Section 4.1 must necessarily be run to compute ε at any polygon vertex which is not a local minimum for any edge. Only after this step is the ε-map guaranteed to be correct at all the vertices.

We observe that many applications need to detect and possibly process thin features only, while thickness information on other (thick enough) parts of the shape is not used (Section 6). In these cases our algorithm can be significantly accelerated by specifying a threshold thickness beyond which a feature is no longer considered thin, and by stopping the region growing as soon as the radius exceeds such a threshold value. Hence, in this case the resulting ε-map exactly represents the thickness at any relevant point bounding a thin feature, whereas a generic thick attribute may replace the value of ε for all the other boundary points of the input shape.
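The early-exit logic can be sketched in a few lines (the event-pair representation below is a simplification we assume for illustration, not the paper's data structure):

```python
def thin_feature_radius(events, eps_max):
    """Thresholded variant of the region growing (a sketch): `events`
    is the sequence of growth events as (radius, antipodean_found)
    pairs.  Returns the local thickness if the feature is thinner than
    eps_max, else None (meaning "thick": no further analysis needed)."""
    for radius, antipodean_found in events:
        if radius > eps_max:
            return None      # early exit: already thicker than the threshold
        if antipodean_found:
            return radius    # antipodean reached: thin feature of this thickness
    return None              # grew past all events without finding one
```

The early exit is what confines the expensive topological checks to thin regions, mirroring the generic "thick" attribute described above.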

5 Solid Object Analysis

In 3D we analyze a polyhedron P bounded by a 2-manifold triangle mesh ∂P, and use a constrained tetrahedrization of P's convex hull to discretize the ball-growing process.

We observe that the minimum of the ε-map within a triangle can be either on its border or in the middle of its internal area. Thus, before computing ε at vertices and possibly splitting edges as we do in 2D, in 3D we may need to split triangles as well to represent some of the local minima.

Vertices, edges and triangles are qualified as polyhedral if they belong to ∂P, whereas they are mesh elements if they belong to the tetrahedrization T. Thus, similarly to the 2D case, a polyhedral element is also a mesh element, but not necessarily vice-versa.

5.1 Thickness at a vertex

We proceed with the region growing as we do for the 2D case, with the difference that in this case R(x) is a tetrahedral mesh and its boundary is a triangle mesh. Thus, λ is the union of all the polyhedral triangles incident at x, R(x) is initialized with the set of all the tetrahedra incident at x, and the algorithm terminates if the closest point q on ∂R(x) is the antipodean of x, that is, when the topology of ∂P restricted to the ball is no longer a single disk. To define the conditions that characterize an antipodean point, we qualify a point q on ∂P based on its incident elements as follows.

If t is a polyhedral triangle having q on its boundary (either along an edge or at one of its three vertices), t may be either completely out of the ball (i.e. its minimum distance from x is exactly the ball's radius), or it may be partly or completely contained in it (i.e. its minimum distance from x is smaller than the ball's radius). In the former case we say that t is an external triangle wrt the ball, whereas in the latter case we say that t is an internal triangle wrt the ball. We use an analogous terminology to characterize an edge incident at q wrt the ball.

Having said that, q is the antipodean of x if one of the following conditions holds:

  • q is in the middle of a polyhedral triangle;

  • q is in the middle of a polyhedral edge whose two incident polyhedral triangles are either both external or both internal;

  • q is a polyhedral vertex whose incident polyhedral edges e₁, …, eₖ are in radial order around q and either:

    all the eᵢ's are internal, or

    all the eᵢ's are external, or

    there are more than two switches in the ordered chain of the eᵢ's (internal/external or vice-versa, including the possible switch between eₖ and e₁).

Thus, at the end of each iteration, we check whether q satisfies one of the aforementioned conditions. If so, the algorithm terminates. Otherwise, we grow the region on all the tetrahedra having q on their boundary and not being already part of R(x).

5.2 Minimum thickness along an edge

During the analysis of a polyhedral edge e we consider the possibility to move the ball's center along the edge. The position of the ball's center at the last iteration represents the minimum of ε along the edge, while the radius indicates its actual value. Note that the minimum may correspond to an edge endpoint or may lie in the middle.

Before starting the iterative region growing, we check for acute configurations as follows. Let v and w be the endpoints of e, and let t₁ and t₂ be the two polyhedral triangles incident at e. We consider all the polyhedral edges which are incident at v but are not edges of either t₁ or t₂. If any such edge forms an acute angle with e, ε(v) is set to zero; otherwise we consider all the polyhedral triangles incident at v but not incident at e. If the projection of e on the plane of any such triangle intersects the interior of the triangle itself, ε(v) is set to zero. Analogous checks are performed on w. If either ε(v) or ε(w) is set to zero due to these checks, the algorithm terminates.

Otherwise, a region R(e) is initialized with all the tetrahedra that share at least a vertex with e, and the set λ is made of all the polyhedral triangles that share at least a vertex with e. At each iteration, we calculate the point q on ∂R(e) which is closest to e, and determine the corresponding point p on e which is closest to q. If q is the "antipodean" of p the algorithm terminates, otherwise it proceeds with the region growing as described in Section 4.1.

Similarly to the 2D case, the definition of antipodean was implicitly extended and a clarification is necessary to avoid missing borderline cases. Imagine a ball centered on e and tangent to a polyhedral triangle. The tangency point may be either in the interior of the triangle or on its boundary. In the former case the algorithm terminates, whereas in the latter the region growing would take place if we consider only the conditions given in Section 5.1. Nevertheless, if we apply an arbitrarily small translation of the ball center along e while possibly growing its radius, the ball would become tangent to the same triangle in the middle (i.e. the algorithm should terminate). Once again, to cope with these cases we extend the definition of antipodean point, and we say that a point q on a polyhedral element of ∂P is the antipodean of a point p in the middle of an edge if the segment pq is orthogonal to that element. Note that the element may be either an edge or a triangle.

Hence, our region growing algorithm terminates if q is on a polyhedral element that satisfies this orthogonality condition, or if it is a polyhedral vertex satisfying the conditions given in Section 5.1.

5.3 Minimum thickness on a triangle

A similar approach can be exploited to compute the minimum ε on a single polyhedral triangle t. The minimum may correspond to a triangle vertex, or lie in the middle of a triangle edge, or be a point in the middle of the triangle itself. Thus, the ball's growth process considers the possibility to move the center on the whole t.

As we do for edges, before starting the iterative region growing we check for acute configurations as follows. Let v₁, v₂ and v₃ be the vertices of t, and let t₁, t₂ and t₃ be the three polyhedral triangles adjacent to t. We consider all the polyhedral edges which are incident at v₁ but are not edges of either t, t₁, t₂ or t₃. If the projection of any such edge on the plane of t intersects the interior of t, ε(v₁) is set to zero. Analogous checks are performed on v₂ and v₃. Furthermore, we consider each polyhedral triangle t′ that shares at least a vertex with t, and if the angle formed by the normal of t and the normal of t′ is obtuse, the value of ε for all their shared vertices is set to zero. If the value of ε for at least one of the three vertices is set to zero due to these checks, the algorithm terminates.

Otherwise, the initial region includes all the tetrahedra which are incident to at least one vertex of T, and the front set is made of all the polyhedral triangles that share at least a vertex with T. At each iteration, we calculate the point on the front which is closest to T, and determine the corresponding point on T which is closest to it. The same arguments and procedure described in Section 5.2 apply here.
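The closest-point queries used at each iteration reduce to a standard closest-point-on-triangle computation. A self-contained sketch, following the classic region-based method described in Ericson's "Real-Time Collision Detection" (our choice of technique, not necessarily the paper's), could be:

```python
import numpy as np

def closest_point_on_triangle(p, a, b, c):
    """Closest point to p on triangle (a, b, c) by classifying p against
    the triangle's Voronoi regions (vertices, edges, interior)."""
    ab, ac, ap = b - a, c - a, p - a
    d1, d2 = np.dot(ab, ap), np.dot(ac, ap)
    if d1 <= 0 and d2 <= 0:
        return a                                   # vertex region of a
    bp = p - b
    d3, d4 = np.dot(ab, bp), np.dot(ac, bp)
    if d3 >= 0 and d4 <= d3:
        return b                                   # vertex region of b
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:
        return a + (d1 / (d1 - d3)) * ab           # edge region of ab
    cp = p - c
    d5, d6 = np.dot(ab, cp), np.dot(ac, cp)
    if d6 >= 0 and d5 <= d6:
        return c                                   # vertex region of c
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:
        return a + (d2 / (d2 - d6)) * ac           # edge region of ac
    va = d3 * d6 - d5 * d4
    if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:
        return b + ((d4 - d3) / ((d4 - d3) + (d5 - d6))) * (c - b)  # edge bc
    denom = 1.0 / (va + vb + vc)
    return a + ab * (vb * denom) + ac * (vc * denom)  # interior projection
```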

5.4 Global thin feature detection

The algorithm to compute the overall ε-map is similar to the 2D version described in Section 4.3. We first analyze each polyhedral triangle (Section 5.3), possibly split it at its local minimum, and re-iterate the analysis on the resulting sub-triangles. We observe that these sub-triangles can be either three or just two (the latter case occurs if the minimum is on one of the three edges). When all the triangles are processed, we analyze each polyhedral edge (Section 5.2), possibly split it at its local minimum, and re-iterate on the two resulting sub-edges. The tetrahedrization is updated upon each split operation performed on the boundary. Finally, we process all the polyhedral vertices (Section 5.1).
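To illustrate the split-and-reiterate strategy in its simplest setting, the sketch below recursively splits a parametrized edge at interior minima of a hypothetical per-point thickness function `eps` (evaluated here by sampling), stopping as soon as the minimum of a sub-edge falls on one of its endpoints:

```python
def split_edge_at_minima(eps, t0=0.0, t1=1.0, samples=64, tol=1e-3):
    """Recursively split the parameter interval [t0, t1] of an edge at the
    interior minimum of the (hypothetical) thickness function eps, and
    return the split points.  Mirrors the global algorithm: each sub-edge
    is re-analyzed until its minimum lies on an endpoint."""
    ts = [t0 + (t1 - t0) * i / samples for i in range(samples + 1)]
    tm = min(ts, key=eps)                   # sampled minimizer on the edge
    if tm - t0 < tol or t1 - tm < tol:
        return []                           # minimum on an endpoint: stop
    return (split_edge_at_minima(eps, t0, tm, samples, tol) + [tm]
            + split_edge_at_minima(eps, tm, t1, samples, tol))
```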

Note that in the 2D case we could safely assume that all the vertices are polygonal vertices. Indeed, a planar polygon can always be triangulated. A corresponding statement cannot be made for the 3D case due to the existence of polyhedra that do not admit a tetrahedrization (e.g. the Schönhardt polyhedron). In these cases, a constrained tetrahedrization can be calculated only if a number of so-called Steiner points are added to the set of input vertices.

6 Thickening

When thickness information is available, the overall geometry can be adaptively changed in an attempt to replace thin features with thicker structures. For some applications a minimum thickness may be well defined (e.g. the layer thickness for 3D printing), whereas for others this value can be empirically set by the user based on his/her experience. In any case, in this section we assume that such a threshold value is available and exploit it, in combination with thickness information, to modify the shape locally. While doing so, we strive to modify the geometry only where necessary, and no more than necessary.

Our thickening problem can be formulated as follows: given a shape S and a threshold thickness ε, we wish to find the shape S' which is most similar to S while being an ε-shape. Clearly, if S has no feature thinner than ε, then S' must coincide with S. Though an exact solution to this problem appears to be extremely complicated, herewith we present a heuristic approach that proved to be both efficient and effective in all of our experiments. The basic idea is the following: if the value ε(p) of the ε-map at a point p is less than the threshold ε, we consider both p and its antipodean q, and move the two points away from each other until their distance becomes equal to ε. Stated differently, the new position for p will be p + ((ε − ε(p))/2)·n, whereas the new position for q will be q − ((ε − ε(p))/2)·n, where n denotes the normalized vector (p − q)/‖p − q‖.
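In code, the displacement of a thin point and its antipodean might look like this (a minimal illustration; `thicken_pair` is a hypothetical name, not the paper's API):

```python
import numpy as np

def thicken_pair(p, q, threshold):
    """Move a thin point p and its antipodean q symmetrically apart along
    their common direction until their distance equals the threshold."""
    d = np.linalg.norm(p - q)          # current local thickness
    n = (p - q) / d                    # normalized direction from q to p
    delta = 0.5 * (threshold - d)      # each point moves half the deficit
    return p + delta * n, q - delta * n
```

After the call, the distance between the two returned points equals the threshold, as required by the ε-shape condition.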

Since the objective in this section is to thicken excessively thin features, herewith we consider the positive ε-map only. On piecewise-linear shapes, this map can be computed as described in Sections 4 and 5 by disregarding the outer part of the constrained triangulation/tetrahedrization, and by considering internal angles only when checking for acuteness. Furthermore, a partial ε-map can be computed as described in Section 4.3. Indeed, by using the threshold ε to stop the process, we can achieve a much faster computation.

After such a computation, each vertex can be assigned to one of three categories, depending on the value of its partial ε-map: in the remainder, an acute vertex is a vertex mapped to a zero value, while a thin vertex is a vertex mapped to a positive value which is lower than the thickness threshold ε. Any other vertex is just thick and is not considered by the algorithm. While computing the ε-map we keep track of the antipodean point of each thin vertex. If such a point does not coincide with a vertex, before proceeding with the thickening we split the simplex that contains it, so as to have a vertex to displace.

For the sake of simplicity, we now assume that there are no acute vertices; their treatment is described later in Sections 6.1 and 6.2. Thus, each thin vertex p with antipodean q is associated with the displacement vector ((ε − ε(p))/2)·n, where n is the normalized vector (p − q)/‖p − q‖, while the antipodean vertex q is associated with the opposite displacement vector. If the same vertex plays several "roles" (e.g. it is the antipodean of two different vertices), its displacement vectors are summed.

When this displacement vector field is complete, the actual displacement may take place. However, even this operation must be undertaken with particular care. Indeed, if two thin features are separated by an insufficient space, their uncontrolled growth might bring one to intersect the other. To avoid this, we keep the original triangulation/tetrahedrization (outer parts included) and, for each single displacement, we check that no flip occurs among the simplexes incident at the displaced vertex. We observe that the need for such a check reveals that our problem may have no solution if we do not allow topological changes. However, if topological changes are acceptable, we just let the surfaces intersect each other and track the so-called outer hull in a second phase, as described in Attene (2014).
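In 2D, the flip check reduces to verifying that no triangle incident at the displaced vertex changes the sign of its signed area. The sketch below (our own illustration, with hypothetical names) rejects a displacement that would invert any incident triangle:

```python
def signed_area(a, b, c):
    """Signed area of 2D triangle (a, b, c); positive if counter-clockwise."""
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))

def displacement_flips(vertex_idx, new_pos, pts, triangles):
    """True if moving pts[vertex_idx] to new_pos inverts the orientation of
    any incident triangle (the 2D analogue of the simplex-flip check)."""
    moved = list(pts)
    moved[vertex_idx] = new_pos
    for i, j, k in triangles:
        if vertex_idx in (i, j, k):
            before = signed_area(pts[i], pts[j], pts[k])
            after = signed_area(moved[i], moved[j], moved[k])
            if before * after <= 0:    # sign change or degeneracy: flip
                return True
    return False
```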

The following two subsections describe how to pre-process the model so as to remove possible acute vertices and make the model ready to be thickened as proposed.

6.1 Pre-processing 2D shapes

Clearly, our thickening approach is not suitable for acute vertices, which are mapped to a zero value and for which no antipodean point exists. Thus, a preprocessing is performed to remove acute vertices before thickening. To remove an acute angle, we cut off a small portion of the shape around the acute vertex and fill the generated hole. Specifically, let v be an acute vertex. A cutting line is defined which intersects the internal angle bisector at v and is perpendicular to it. The distance between v and the cutting line may be arbitrarily small. Then, for each edge incident at v, the endpoint v is replaced by the intersection point between the cutting line and that edge. Finally, an additional edge is added to close the polygon (Figure 8). Then, the two algorithms described in Sections 4.1 and 4.2 are exploited to update the ε-map with thickness information related to the modified parts of the shape.

Figure 8: Removing acute angles. A cutting line is exploited which is perpendicular to the internal bisector of the angle.
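Assuming the acute vertex and its two edge neighbors are known, the cut can be sketched as follows (`cut_acute_vertex` is a hypothetical name): both replacement points lie on the incident edges, on the line perpendicular to the internal bisector at a small distance delta from the vertex.

```python
import numpy as np

def cut_acute_vertex(v, a, b, delta):
    """Replace acute vertex v (with incident edges v-a and v-b) by two
    points whose projection on the internal bisector has length delta,
    i.e. the intersections of the edges with the cutting line."""
    ea = (a - v) / np.linalg.norm(a - v)
    eb = (b - v) / np.linalg.norm(b - v)
    bis = ea + eb
    bis /= np.linalg.norm(bis)                 # unit internal bisector
    pa = v + (delta / np.dot(ea, bis)) * ea    # on edge v-a
    pb = v + (delta / np.dot(eb, bis)) * eb    # on edge v-b
    return pa, pb
```

The segment joining the two returned points is the new closing edge of the polygon.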

6.2 Pre-processing 3D models

Getting rid of acute vertices in 3D is not as easy as in 2D due to the possible presence of saddles. To solve this problem, we consider the set of all the triangles which are incident at acute vertices and, on such a set only, we adaptively run a subdivision step using the modified interpolating Butterfly scheme by Zorin et al. (1996). Then, the three methods proposed in Section 5 are exploited to update the ε-map restricted to the modified part of the model. The process is repeated as long as acute angles are found, and convergence is guaranteed because the subdivision generates a continuous surface at the limit (Fig. 9). To reduce the area of the modification, before running the aforementioned procedure we subdivide the considered triangles without changing the geometry (i.e. each edge is split at its midpoint) and keep working only on the sub-triangles which are incident at acute vertices. This can be done as many times as necessary to keep the changes within a tolerable limit.

Figure 9: A model with acute vertices (left) and the triangles incident at these vertices (middle). After local subdivision, all the vertices are non-acute (right).
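The geometry-preserving refinement step mentioned above (each edge split at its midpoint) can be sketched as a 1-to-4 triangle split:

```python
def midpoint_split(a, b, c):
    """Split triangle (a, b, c) at its edge midpoints into four coplanar
    sub-triangles, leaving the geometry unchanged."""
    m = lambda p, q: tuple((pi + qi) / 2.0 for pi, qi in zip(p, q))
    mab, mbc, mca = m(a, b), m(b, c), m(c, a)
    return [(a, mab, mca), (mab, b, mbc), (mca, mbc, c), (mab, mbc, mca)]
```

Each sub-triangle has one quarter of the original area, so repeated application shrinks the region incident at acute vertices as finely as desired.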

7 Results and Discussion

We implemented both our analysis and thickening methods in C++ with the support of Tetgen Si (2015) for the computation of the constrained tetrahedrizations. This section reports the results of some of our experiments on a set of 3D input meshes coming from the Digital Shape Workbench DSW (????). Our prototype software lets the user set an analysis direction, that is, choose whether to compute a positive, a negative, or a bi-directional ε-map. Also, if the complete ε-map is not necessary, the user can set a threshold value to compute only a partial ε-map. In the latter case the region growing is interrupted as soon as the ball radius reaches the threshold.

Figure 10 depicts two complete bi-directional ε-maps, whereas Figures 11(a) and 11(c) show a complete positive and a complete negative map, respectively.

(a) Model 197
(b) Model 121
Figure 10: Complete ε-maps. The color palette goes from blue (thinnest parts) to red (thickest parts).

Figures 11(b) and 11(d) depict examples of positive and negative partial ε-maps, respectively. Note that the two partial maps highlight thin features detected in the model and in its complementary part, respectively. As expected, the thin bridge connecting the two fingers on the right is detected as the thinnest feature in the object, while the cavity generated by the bridge and the gap between the fingers are detected as the thinnest features in its complementary part. An additional partial negative ε-map is shown in Figure 12.

Figure 11: Detecting thin features in Model 187. Colors go from blue (thinnest parts) to red (thickest parts). The four panels show the complete positive, complete negative, partial positive and partial negative ε-maps.
Figure 12: A negative partial ε-map and a close-up of the detected thin features in the complementary part of the object.

During the thickening, the result of the analysis is exploited to modify the input shape only where the thickness is lower than the input threshold, while every other part is kept unchanged. Figure 13 shows both the complete and the partial positive ε-maps for Model 372. Note that the latter is sufficient to run the thickening. In the example in Figure 13(c), only the handle and the spout of the teapot are edited, as expected.

Figure 13: Thickening. (a) The complete positive ε-map. (b) The partial positive ε-map. (c) The result of our thickening algorithm.

In Figure 16, positive ε-maps are shown which represent thickness information before and after the application of our thickening algorithm. Our heuristic approach actually increases the minimum thickness of the shape, but in some cases a single iteration is not sufficient to achieve the desired thickness threshold. In these cases, the whole algorithm can be run again, and we could verify that this process converged to an actual (positive) ε-shape in all of our experiments. Figure 17 shows some additional results.

Execution time

Experiments were run on a Linux machine (1.9GHz Intel Core i7, 16GB RAM). Table 1 shows the execution times for the computation of complete and partial ε-maps. As expected, our algorithm is significantly faster when a thickness threshold is specified.

At a theoretical level, the time needed to characterize a single vertex grows as the number of tetrahedra to be visited increases. The worst case is represented by a convex polyhedron whose vertices all lie on the surface of a sphere. In this case, for each vertex O(n) tetrahedra must be visited, which means that the overall complexity is O(n²). In contrast, if a shape is very thin everywhere and the tetrahedral mesh has no unnecessary internal Steiner points, then the antipodean is normally found in the first few iterations. The use of a threshold to compute a partial map simulates this behaviour. These theoretical considerations are confirmed by our experiments. As an example, the computation times for Model 192 (i.e. the hand) and Model 372 (i.e. the teapot) are markedly different, even though both models have about 13.5K triangles.

Table 1 reports the time spent by our algorithm to compute the complete bi-directional ε-map for a set of test models. These values were measured while running a sequential version of the algorithm, where simplexes are analyzed one after the other. However, it should be considered that our method is embarrassingly parallel, that is, several simplexes of the same degree can be analyzed simultaneously. This holds for the computation of both complete and partial ε-maps. Our implementation considers this aspect and exploits OpenMP Dagum and Menon (1998) to enable such parallelism. Thanks to this parallel implementation, we reduced the running time by a factor of 2.5 on our 4-core machine.
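Since simplexes of the same degree are analyzed independently, the computation is a plain parallel map. A Python sketch (the paper's implementation uses C++ with OpenMP instead; `analyze` stands for a hypothetical per-simplex thickness routine) could be:

```python
from concurrent.futures import ThreadPoolExecutor

def compute_epsilon_map(simplexes, analyze, workers=4):
    """Apply the (hypothetical) per-simplex analysis routine to all
    simplexes of the same degree in parallel; results keep input order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(analyze, simplexes))
```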

Input       Input       Complete    Partial     Model
model       triangles   ε-map       ε-map       shown in
Model 400   7736        2720        45          Figure 14
Model 121   12282       1995        405         Figure 10(b)
Model 372   13376       15380       320         Figure 13
Model 192   13978       6295        430         Figure 14
Model 197   15102       8250        422         Figure 10(a)
Model 397   15242       10640       420         Figure 14
Model 187   15678       13400       487         Figure 11
Model 130   15998       6605        586         Figure 14
Table 1: Execution times in seconds. Partial ε-maps are computed by setting the thickness threshold to a fraction of the bounding box diagonal of the input mesh.

7.1 Comparison

The Shape Diameter Function (SDF) Shapira et al. (2008) can be considered the state-of-the-art tool for thickness evaluation on meshes. Its implementation available in CGAL cga (2016) has been exploited to analyze our dataset and compare the results. Figure 14 shows a comparison between complete ε-maps and the SDF. As expected, in some cases the two thickness-based descriptors provide a similar representation of the input shape.

Figure 14: Comparison between the SDF and ε-shapes. For each input mesh, the SDF is on the left and the complete ε-map is on the right.

Nevertheless, some relevant differences are visible. Consider the thickness evaluated at the extremities of the shape (e.g. the fingertips of the hand in Figure 14) and at points close to very narrow features (Figure 15). In both cases, the statistical averaging of distances performed by the SDF provides an unexpected result. In the former case, most of the rays cast from an extremal point hit the surface close to the point itself. The opposite happens when a narrow bottleneck is present. By increasing the number of cast rays a more precise result may be achieved but, though this would have a significant impact on performance, no guarantee can be given in any case.

Our method overcomes this limitation and computes the exact thickness value even at these points, according to our definition. Note that this exact value is guaranteed to be calculated and does not depend on any user-defined input parameter. Also, with our method narrow features can be efficiently detected by means of a partial ε-map.

(a) SDF
(b) Ours
Figure 15: Comparison between the SDF and ε-shapes. The statistical averaging of distances performed by the SDF returns a high thickness value even at the bottom of the cylinder under the cut. Our method evaluates this area as the thinnest area of the object.
Figure 16: Thickening Model 124. Vertices mapped to a "thick" attribute are grey-colored. Thin vertices are highlighted by colors: from blue (far from the thickness threshold) to red (close to the threshold). The panels show the partial positive ε-map of the input shape, the partial positive ε-map after one iteration of our thickening algorithm, and the partial positive ε-map of the final thickened object.
Figure 17: Results. For each mesh, a partial positive ε-map is shown on the left, while the result of our thickening algorithm is shown on the right. Vertices mapped to a "thick" attribute are grey-colored. Thin vertices are highlighted by colors: from blue (far from the thickness threshold) to red (close to the threshold).

8 Conclusions and Future Work

We have shown that both 2D polygons and 3D polyhedra can be automatically characterized based on their local thickness. Our novel mathematical formulation allows us to implement effective algorithms that associate an exact thickness value with any relevant point of the object, without the need to set any unintuitive parameter, and without relying on estimates or approximations. Furthermore, we have shown how our analysis can be used to thicken thin features, so that models which are inappropriate for certain applications become suitable thanks to an automatic local editing.

While the analysis is rigorously defined, our thickening algorithm is still based on heuristics and cannot guarantee a successful result in all cases. Stated differently, if ε is the threshold thickness used to perform the thickening, we would like to guarantee that the thickened model is an ε-shape. Therefore, an interesting direction for future research is represented by the study of thickening methods that provide such a guarantee, and it is easy to see that such methods must necessarily be free to change the local topology.


Acknowledgements

This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 680448 (CAxMan). Thanks are due to all the members of the Shapes and Semantics Modeling Group at IMATI-CNR, and in particular to Silvia Biasotti, for helpful discussions.



  • Shapira et al. (2008) Shapira, L., Shamir, A., Cohen-Or, D.. Consistent mesh partitioning and skeletonisation using the shape diameter function. The Visual Computer 2008;24(4):249. doi:10.1007/s00371-007-0197-5.
  • Schmitt et al. (2015) Schmitt, W., Sotomayor, J.L., Telea, A., Silva, C.T., Comba, J.L.D.. A 3d shape descriptor based on depth complexity and thickness histograms. In: 2015 28th SIBGRAPI Conference on Graphics, Patterns and Images. 2015, p. 226–233. doi:10.1109/SIBGRAPI.2015.51.
  • Osswald et al. (2008) Osswald, T.A., Turng, L.S., Gramann, P.J.. Injection molding handbook. Hanser Verlag; 2008.
  • Telea and Jalba (2011) Telea, A., Jalba, A.. Voxel-based assessment of printability of 3d shapes. In: Proceedings of the 10th International Conference on Mathematical Morphology and Its Applications to Image and Signal Processing. ISMM’11; Berlin, Heidelberg: Springer-Verlag. ISBN 978-3-642-21568-1; 2011, p. 393–404. URL
  • Hergel and Lefebvre (2015) Hergel, J., Lefebvre, S.. 3D Fabrication of 2D Mechanisms. Computer Graphics Forum 2015;doi:10.1111/cgf.12555.
  • Choi et al. (1997) Choi, H.I., Choi, S.W., Moon, H.P.. Mathematical theory of medial axis transform. Pacific Journal of Mathematics 1997;181(1):57–88.
  • Amenta et al. (2001) Amenta, N., Choi, S., Kolluri, R.K.. The power crust, unions of balls, and the medial axis transform. Computational Geometry 2001;19(2-3):127–153.
  • Miklos et al. (2010) Miklos, B., Giesen, J., Pauly, M.. Discrete scale axis representations for 3d geometry. In: ACM SIGGRAPH 2010 Papers. SIGGRAPH ’10; New York, NY, USA: ACM. ISBN 978-1-4503-0210-4; 2010, p. 101:1–101:10. doi:10.1145/1833349.1778838. URL
  • Li et al. (2015) Li, P., Wang, B., Sun, F., Guo, X., Zhang, C., Wang, W.. Q-mat: Computing medial axis transform by quadratic error minimization. ACM Transactions on Graphics (TOG) 2015;35(1):8.
  • Hildebrand and Rüegsegger (1997) Hildebrand, T., Rüegsegger, P.. A new method for the model-independent assessment of thickness in three-dimensional images. Journal of microscopy 1997;185(1):67–75.
  • Dougherty and Kunzelmann (2007) Dougherty, R., Kunzelmann, K.. Computing Local Thickness of 3D Structures with ImageJ. Microscopy and Microanalysis 2007;13(Suppl 2):1678–1679.
  • Yezzi and Prince (2003) Yezzi, A.J., Prince, J.L.. An eulerian pde approach for computing tissue thickness. IEEE transactions on medical imaging 2003;22(10):1332–1339.
  • Maragos (1996) Maragos, P.. Differential morphology and image processing. IEEE Transactions on Image Processing 1996;5(6):922–937. doi:10.1109/83.503909.
  • Str (????) Stratasys. ???? URL
  • Lambourne et al. (2005) Lambourne, J., Djuric, Z., Brujic, D., Ristic, M.. Calculation and visualisation of the thickness of 3d cad models. In: Shape Modeling and Applications, 2005 International Conference. IEEE; 2005, p. 338–342.
  • Kovacic et al. (2010) Kovacic, M., Guggeri, F., Marras, S., Scateni, R.. Fast approximation of the shape diameter function. In: Proceedings Workshop on Computer Graphics, Computer Vision and Mathematics (GraVisMa); vol. 5. 2010.

  • Rolland-Nevière et al. (2013) Rolland-Nevière, X., Doërr, G., Alliez, P.. Robust diameter-based thickness estimation of 3d objects. Graphical Models 2013;75(6):279–296.
  • Wang and Chen (2013) Wang, C., Chen, Y.. Thickening freeform surfaces for solid fabrication. Rapid Prototyping Journal 2013;19(6):395–406.
  • Calderon and Boubekeur (2014) Calderon, S., Boubekeur, T.. Point morphology. ACM Trans Graph 2014;33(4):45:1–45:13. doi:10.1145/2601097.2601130.
  • Stava et al. (2012) Stava, O., Vanek, J., Benes, B., Carr, N., Měch, R.. Stress relief: Improving structural strength of 3d printable objects. ACM Trans Graph 2012;31(4):48:1–48:11. doi:10.1145/2185520.2185544. URL
  • Zhou et al. (2013) Zhou, Q., Panetta, J., Zorin, D.. Worst-case structural analysis. ACM Trans Graph 2013;32(4):137:1–137:12. doi:10.1145/2461912.2461967. URL
  • Chen and Han (1990) Chen, J., Han, Y.. Shortest paths on a polyhedron. In: Proceedings of the sixth annual symposium on Computational geometry. ACM; 1990, p. 360–369.
  • Edelsbrunner and Mücke (1990) Edelsbrunner, H., Mücke, E.P.. Simulation of simplicity: A technique to cope with degenerate cases in geometric algorithms. ACM Trans Graph 1990;9(1):66–104. doi:10.1145/77635.77639. URL
  • Attene (2014) Attene, M.. Direct repair of self-intersecting meshes. Graph Models 2014;76(6):658–668. doi:10.1016/j.gmod.2014.09.002. URL
  • Zorin et al. (1996) Zorin, D., Schröder, P., Sweldens, W.. Interpolating subdivision for meshes with arbitrary topology. In: Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques. SIGGRAPH ’96; New York, NY, USA: ACM. ISBN 0-89791-746-4; 1996, p. 189–192. doi:10.1145/237170.237254. URL
  • Si (2015) Si, H.. Tetgen, a delaunay-based quality tetrahedral mesh generator. ACM Trans Math Softw 2015;41(2):11:1–11:36. doi:10.1145/2629697. URL
  • DSW (????) Digital shape workbench. ???? URL
  • Dagum and Menon (1998) Dagum, L., Menon, R.. Openmp: An industry-standard api for shared-memory programming. IEEE Comput Sci Eng 1998;5(1):46–55. doi:10.1109/99.660313. URL
  • cga (2016) Cgal 4.9 – triangulated surface mesh segmentation. 2016. URL