Automated and Accurate Geometry Extraction and Shape Optimization of 3D Topology Optimization Results

04/11/2020 ∙ by Marco K. Swierstra, et al. ∙ University of Amsterdam ∙ Delft University of Technology

Designs generated by density-based topology optimization (TO) exhibit jagged and/or smeared boundaries, which forms an obstacle to their integration with existing CAD tools. Addressing this problem by smoothing or manual design adjustments is time-consuming and affects the optimality of TO designs. This paper proposes a fully automated procedure to obtain unambiguous, accurate and optimized geometries from arbitrary 3D TO results. It consists of a geometry extraction stage using a level-set-based design description involving radial basis functions, followed by a shape optimization stage involving local analysis refinements near the structural boundary using the Finite Cell Method. Well-defined bounds on basis function weights ensure that sufficient sensitivity information is available throughout the shape optimization process. Our approach results in highly smooth and accurate optimized geometries, and its effectiveness is illustrated by 2D and 3D examples.




1 Introduction

Topology optimization (TO) is an increasingly popular generative design method that forms an established part of the design process in various branches of industry. The density-based approach (Bendsøe and Sigmund (2003)) is dominant and available in several commercial software packages. TO is most often used to generate design concepts in an early stage of the design process, and optimizes a material distribution defined in terms of local density variables. Typical density-based TO results, in combination with common filtering techniques, exhibit intermediate densities representing virtual semi-dense material, and jagged boundaries appear due to the use of a finite-element-based design discretization (e.g. Figures 1 (left) and 2). In addition, the analysis accuracy in the TO process is typically low, due to the inaccurate representation of a smooth structural boundary and the low-order finite elements typically used. Currently, TO results are therefore redrawn or post-processed manually for further design iterations and higher-fidelity analysis. Given the efficiency of present TO processes, this manual post-processing step increasingly becomes a time bottleneck, and the optimality of the TO design is lost. In addition, a more seamless connection between TO and existing CAD tools, e.g. for design validation, is desired.

Figure 1: Three-stage structural design optimization process: topology optimization, geometry extraction and shape optimization. This paper focuses on the post-processing of TO results, i.e. Stages 2 and 3.

How to bridge the gap between TO and CAD is a longstanding challenge (e.g. Olhoff et al. (1991)). Various methods have been proposed to obtain smooth, high-resolution TO results in order to reduce the aforementioned gap. Costa Jr and Alves (2003), Stainko (2006) and Nana et al. (2016) used h-refinement to improve the boundary resolution and to reduce the number of design variables by clearing large void areas. Nguyen et al. (2010) decoupled the structural analysis mesh and the density design variables to obtain a higher-resolution TO result at limited computational cost. Kumar and Gossard (1996) used a shape density function, interpolated from nodal density variables, to optimize both topology and shape at once. Similarly, Kang and Wang (2012) mapped the nodal density variables onto the material design space using Shepard interpolants, enabling a more complex spatial distribution of the material density within an individual element. Additionally, Wang et al. (2013) adaptively refined the density field by adding nodal density variables without remeshing the finite element model, while Wang et al. (2014) extended this concept with independent adaptive refinement of the analysis mesh.

The aforementioned refinement methods inevitably increase the computational cost of the TO process considerably, in particular in 3D. Rather than improving the TO result, Papalambros and Chirehdast (1990) proposed a three-stage process for structural design optimization: 1) generate a TO result, 2) process the density distribution to obtain a practical and realistic structure, and 3) refine the final topology by detailed shape optimization. Our proposed approach essentially also follows these three steps, as illustrated in Figure 1. A shape optimization is needed because the design obtained using TO loses its optimality or even feasibility as soon as it is translated into a crisp and smooth geometry. This is caused by thresholding of intermediate densities to obtain a binary image and arbitrary smoothing of the boundary. A subsequent shape optimization of the obtained geometry is used to retrieve a feasible and optimized design.

Various implementations of this general philosophy have been proposed previously. Maute and Ramm (1995) proposed two methods, one involving the previously mentioned h-refinement, and the other a three-stage approach. The TO result was mapped to a spline-based geometry description, which was subsequently remeshed for another TO cycle. These cycles were repeated until the TO result became smooth and crisp. Bremicker et al. (1991) processed the TO result into a binary image from which they obtained a truss or continuum structure. Lin and Chao (2000) also first obtained a binary image, after which they described the external boundaries using B-splines and the internal features by standard shapes. More recently, Yi and Kim (2017) developed a similar method in which geometric features of the TO result were detected to construct a parametrized CAD model. Tang and Chang (2001) and Chang and Tang (2001) extracted a geometry by averaging the boundary nodes of a thresholded TO result and applied a least-squares fit using B-splines. Hsu et al. (2001) fitted B-splines on density contours to interpret the TO result.

Once obtained, a spline representation of a geometry can be used for a subsequent shape optimization. Spline representations can be obtained in 2D, but automatically creating spline surfaces to represent a complex 3D TO geometry is challenging. Hsu and Hsu (2005) attempted to overcome this difficulty by sweeping 2D contours into a 3D surface. Commercial software is also available to perform this step in an interactive manner; however, user input remains necessary. Furthermore, spline representations add complexity, as the transition from the TO model to the shape optimization model involves remeshing, and the change in design parametrization typically requires a different sensitivity analysis implementation. Most importantly, no previously reported method has been fully satisfactorily demonstrated in 3D, yet the vast majority of TO cases considered in industry are three-dimensional.

This paper focuses on the automated post-processing of both 2D and 3D density-based TO results in order to form a convenient bridge to other CAD tools. The aim is to obtain a structural design optimization process capable of generating optimized, smooth and crisp geometries with accurate, optimized performance, without any manual labour. This includes creating a mesh only once for the TO, and utilizing sensitivity analysis procedures that remain essentially the same throughout the structural design optimization process. To this end, we propose a fully integrated level-set-based shape optimization, where the initial level-set is constructed from the result of a density-based TO process. The end result will be an optimized geometry defined by a level-set function, which can be post-processed into other geometry representations. Note that generation of e.g. CAD-compatible NURBS descriptions is a very different challenge that falls outside the scope of this paper.

The novelty of our approach lies in Stages 2 and 3 of the design process shown in Figure 1: seamlessly combining geometry extraction and shape optimization of density-based TO results while satisfying the design constraints after each stage. To achieve this, we combine a parametric level-set formulation with bounded level-set gradients and the Finite Cell Method (FCM) to increase analysis accuracy. While both techniques exist, their specific combination has not been reported before in the literature. The level-set gradients are rigorously controlled to prevent convergence problems, by defining a new relation between the maximum level-set gradient and the bounds on the weights of the parametrization functions. Our approach has been conceived with the aim of being easy to apply both in 2D and 3D, while most reported methods have only been demonstrated on 2D examples.

An alternative approach, using a level-set method (LSM) from the start of the design optimization process, is not our preferred choice for three reasons. First, all commercial software implementations of TO use the density-based setting, so to maximize the applicability of our proposed method we choose this as our starting point. Second, LSMs generally show a stronger dependence on the chosen initial design than density-based TO (Van Dijk et al. (2013)). The use of topological derivatives improves this aspect, but these are not straightforward to formulate and differ strongly between different kinds of optimization objectives and constraints. Third, we employ FCM-based local refinement to increase the analysis accuracy in the level-set-based final shape optimization. This is affordable when boundary motion remains modest, but for the full TO process it would come at high computational cost due to frequent refinement updates. Thus, instead of opting for a single method, in our view combining the strengths of density- and level-set-based approaches is most advantageous.

The following three sections discuss our approach to each of the three stages of the structural design optimization process. To illustrate the process, 2D-cases are optimized, with a focus on the common problem of minimizing compliance subject to a volume fraction constraint. Section 5 discusses the performance of the proposed method using both 2D and 3D compliance minimization case studies. Finally, in Section 6, conclusions and recommendations are given.

2 Stage 1 - Topology optimization

The first stage of the design optimization process is the topology optimization. The popular density-based SIMP method was independently introduced by Bendsøe (1989) and Zhou and Rozvany (1991). In the SIMP method one defines a density $\rho_e$ in each element of the domain as a design variable that can vary between 0 (void) and 1 (solid). The stiffness properties of an element are linked to this density using the following power law:

$$E_e(\rho_e) = \rho_e^{\,p} E_0, \quad (1)$$

where $p$ is the penalization parameter and $E_0$ is Young's modulus of the isotropic material. Equation 1 penalizes intermediate densities as these provide relatively low stiffness. In this paper we base the numerical examples on the well-known compliance minimization problem to illustrate the proposed post-processing method. This optimization problem is defined as:

$$\begin{aligned} \min_{\boldsymbol{\rho}} \quad & c(\boldsymbol{\rho}) = \mathbf{f}^{T}\mathbf{u} \\ \text{s.t.} \quad & \mathbf{K}(\boldsymbol{\rho})\,\mathbf{u} = \mathbf{f}, \\ & V(\boldsymbol{\rho}) \le \bar{V}, \\ & 0 < \rho_{\min} \le \rho_e \le 1. \end{aligned} \quad (2)$$

Here $c$ denotes the compliance objective, and $\boldsymbol{\rho}$, $\mathbf{u}$ and $\mathbf{f}$ are respectively the density design variable vector, displacement vector and load vector. Furthermore, $\mathbf{K}$ represents the FE stiffness matrix, where the SIMP interpolation is used to define the dependence on $\boldsymbol{\rho}$. $V$ and $\bar{V}$ denote the actual and allowed volume, and the last line defines the bounds on the design variables that define the material layout. For further discussion of this classical problem, we refer to e.g. Bendsøe and Sigmund (2003).
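As a minimal sketch of the SIMP interpolation and the classical compliance sensitivity it induces (assuming $p = 3$ and unit Young's modulus; all names are illustrative, not the paper's implementation):

```python
import numpy as np

def simp_youngs_modulus(rho, E0=1.0, p=3.0):
    # SIMP power law: element stiffness scales with rho^p (Equation 1)
    return rho**p * E0

def compliance_sensitivity(rho, ue, Ke, p=3.0):
    # Classical self-adjoint compliance sensitivity for one element:
    # dc/drho = -p * rho^(p-1) * ue^T Ke ue
    return -p * rho**(p - 1) * (ue @ Ke @ ue)

# Intermediate densities are penalized: half-dense material provides
# only 12.5% of the full stiffness when p = 3.
print(simp_youngs_modulus(0.5))  # 0.125
```

The steep penalization is what drives the optimizer toward near-discrete solid/void layouts.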

A typical TO result for this problem is shown in Figure 2, obtained using the Python version of the 99-line Matlab code of Sigmund (2001). Two features appear which make TO results not directly usable in a CAD setting. First of all, jagged boundaries appear due to the use of element-wise constant densities as a geometry description. Secondly, the result is not fully discrete because some elements have an intermediate density. In this example a coarse mesh has been chosen deliberately to highlight these aspects — in practice, the designer is free to choose the mesh appropriate for the desired level of detail and computational cost. Mesh refinement reduces the severity of the aforementioned problems, but also leads to a rapid increase in computational cost, particularly in 3D. Moreover, at the finest length scale of the generated design these problems can invariably be observed.

Figure 2: Example of a topology-optimized MBB beam on a 64×32 grid using bilinear shape functions. A coarse grid is chosen to clearly illustrate typical artifacts.

3 Stage 2 - Geometry extraction

Density-based TO results exhibit jagged and blurred boundaries, while designers and established CAD tools require smooth and clear-cut geometry descriptions. Smooth boundaries can be represented using different methods. In this work, a level-set function (LSF) is used as the geometry description because of its inherently smooth characteristics. Furthermore, an LSF is relatively easy to extend to 3D compared to, for example, spline representations, and a parametric LSF makes it possible to utilize the same sensitivity analysis as used for the TO stage.

3.1 Density field to LSF

The design represented by the optimized density distribution from the SIMP method needs to be translated into a geometry description based on an LSF. Our LSF is described by a summation of Radial Basis Functions (RBFs), similar to Luo et al. (2008), each located at the centroid $\mathbf{x}_i$ of an element $i$. Gaussian RBFs will be used and are described by:

$$\theta_i(r) = e^{-(r/h)^2}. \quad (3)$$

Here $r$ is the radial distance from the location of the RBF and $h$ is the element edge length. The element edge length is chosen as 1 in this research, so $h$ is omitted in further expressions related to the RBFs. Each RBF is multiplied with a certain weight $w_i$; these weights form the design variables in a subsequent shape optimization. The LSF is the summation of the RBFs in the design domain and thus becomes:

$$\phi(\mathbf{x}, \mathbf{w}) = \sum_{i=1}^{n} w_i\,\theta_i(\mathbf{x}) - c, \quad (4)$$

where $n$ is the number of elements, and a constant $c$ is subtracted to shift the LSF globally. Initially, we choose $c = 0$. The weights $\mathbf{w}$ control the shape of the LSF, whose zero-level contour defines the boundary of the structure. These weights will also be used as design variables in Stage 3. In the following, the dependencies of $\phi$ will be omitted for brevity.

To initialize the weights, a set of linear equations can be set up to determine the weights of the RBFs, such that the LSF matches the densities obtained at Stage 1 in all element centroids:

$$\mathbf{A}\mathbf{w} = \boldsymbol{\rho}^{*}, \quad (5)$$

with

$$A_{ij} = \theta_j\!\left(\lVert \mathbf{x}_i - \mathbf{x}_j \rVert\right), \quad (6)$$

where $\mathbf{x}_i$ and $\mathbf{x}_j$ denote respectively the centroids of elements $i$ and $j$, and $\boldsymbol{\rho}^{*}$ contains the optimized element densities from Stage 1. While Gaussian RBFs theoretically have infinite support regions, for numerical convenience a finite support radius is considered as the RBF quickly decays to zero. The RBFs used in this research have a support diameter of 7 elements. As a result, matrix $\mathbf{A}$ is sparse and Equation 5 can be solved with limited effort.
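As an illustration of this initialization step, the sparse collocation solve can be sketched in Python. The Gaussian form exp(−r²) with unit element edge length follows the description above; the function name, the grid helper and the exact truncation radius are illustrative assumptions:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def fit_rbf_weights(rho, nx, ny, support_radius=3.5):
    """Initialize RBF weights so the LSF reproduces the TO densities at
    all element centroids (a sparse collocation sketch).

    rho : (nx*ny,) optimized element densities from Stage 1.
    Gaussian RBFs theta(r) = exp(-r^2) sit at unit-spaced centroids and
    are truncated at a finite support radius (7-element diameter)."""
    xc, yc = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    centroids = np.column_stack([xc.ravel(), yc.ravel()])
    n = nx * ny
    A = lil_matrix((n, n))
    for i, ci in enumerate(centroids):
        d2 = np.sum((centroids - ci) ** 2, axis=1)
        cols = np.where(d2 <= support_radius**2)[0]  # truncation -> sparse A
        A[i, cols] = np.exp(-d2[cols])
    return spsolve(A.tocsr(), rho)

# Example: a uniform density field is reproduced exactly at the centroids
w = fit_rbf_weights(np.full(6 * 4, 0.4), 6, 4)
```

Because each row of A has only a small, fixed number of nonzeros, the cost of the solve grows mildly with the grid size.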

3.2 Thresholding the LSF

It is common in level-set-based TO methods that a function $\phi(\mathbf{x})$ is used to distinguish the structural domain from the void domain, where $\phi > 0$ marks the interior of the structure and $\phi < 0$ the outside region. To allow for gradient-based optimization, a continuous Heaviside approximation $H(\phi)$ with steepness parameter $\beta$ is used:

$$H(\phi) = \frac{1}{1 + e^{-2\beta\phi}}. \quad (7)$$

For further analysis, it is convenient to again define a density field, linked to the LSF. This density field is denoted by $\rho(\phi)$, and is given by:

$$\rho(\phi) = \rho_{\min} + (1 - \rho_{\min})\,H(\phi). \quad (8)$$

Equation 8 links negative LSF-values to void regions (with density $\rho_{\min}$, a minimum density value used for numerical stability) and positive LSF-values to solid regions (with density 1.0). In this way, the intermediate densities observed in the design obtained from Stage 1 can be minimized in the extracted geometry. A finite value of $\rho_{\min}$ maintains continuity of the geometry description, which allows for subsequent gradient-based shape optimization. A discussion of sensitivity analysis follows in Section 4.2.

The LSF obtained by solving Equation 5, $\phi_0$, is fully positive because the element densities of the TO result are as well. Using Equation 7 would then result in a completely solid design domain. So, the LSF should be lowered, to an extent that the area within the zero-level contour satisfies the desired volume fraction, see Figure 3. This is equivalent to setting the appropriate threshold value. The shift value $s$ of the LSF must satisfy Equation 9:

$$\int_{\Omega_D} H(\phi_0 - s)\,\mathrm{d}\Omega - V_f\,V_D = 0, \quad (9)$$

where $\Omega_D$ denotes the design domain with volume $V_D$, and $V_f$ is the set volume fraction of the optimization problem. Numerical integration is performed using the Gauss integration points from Stage 1. Shift value $s$ is found by solving the nonlinear single-variable problem given by Equation 9 using the Newton-Raphson method, and is kept constant in the remainder of the procedure.
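The thresholding step can be sketched as follows, assuming a logistic form for the smooth Heaviside (the paper's exact approximation may differ) and uniform integration weights; `find_shift`, `beta=6.0` and the example values are illustrative:

```python
import numpy as np

def heaviside(phi, beta=6.0):
    # Smooth Heaviside approximation; beta controls the steepness
    # (a logistic sketch of the projection of Equation 7).
    return 1.0 / (1.0 + np.exp(-2.0 * beta * phi))

def find_shift(phi0, vol_frac, beta=6.0, tol=1e-9):
    """Newton-Raphson for the shift s such that the projected volume
    fraction of phi0 - s matches vol_frac (a sketch of Equation 9).
    phi0 holds LSF values at the integration points, equal weights."""
    s = np.mean(phi0)
    for _ in range(100):
        h = heaviside(phi0 - s, beta)
        g = np.mean(h) - vol_frac                 # volume residual
        dg = np.mean(-2.0 * beta * h * (1 - h))   # d(residual)/ds
        s_new = s - g / dg
        if abs(s_new - s) < tol:
            return s_new
        s = s_new
    return s

phi0 = np.linspace(0.2, 1.0, 1000)   # a fully positive initial LSF
s = find_shift(phi0, 0.4)
```

Since the residual is monotone in the shift, the Newton iteration converges in a handful of steps.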

Figure 3: Projection of the shifted LSF (red) to a geometry (blue) using the smooth Heaviside function of Equation 7. The image shows the LSF and geometry extracted from the SIMP example shown in Figure 2.

4 Stage 3 - Shape optimization

The weights of the RBFs, obtained in Stage 2, can be used as design variables for a subsequent shape optimization of the smooth geometry, similar to e.g. Luo et al. (2009). The aim of this optimization process is to fine-tune the shape of the design; however, for added flexibility the chosen formulation also allows for topological changes. This has the advantage that e.g. inefficient members or structural details can be removed in this stage. Nevertheless, the main steps in defining the structure's topology are taken in Stage 1, while in Stage 3 the focus is on refining its shape. In principle the optimization problems considered in Stages 1 and 3 can differ (e.g. by inclusion of additional constraints); however, in this paper we keep them the same. Hence a problem similar to Equation 2 is optimized here, except that the RBF weights instead of the density variables form the design vector. The RBF weights define the LSF, which in turn defines the density in every integration point in the domain.

A structural and sensitivity analysis are needed in order to perform a shape optimization using an LSM. The slope of the LSF should be controlled as well to ensure good optimization convergence (Van Dijk et al. (2013)). The following subsections discuss how these aspects are treated in the proposed method.

4.1 Structural analysis

The structural analysis of the geometry defined implicitly by the LSF is not straightforward. The original TO mesh and the LSF boundary do not match. Remeshing is an option but this is considered less robust, especially without manual intervention, and can negatively affect the consistency of design sensitivity information. We choose to instead perform the structural analysis using p-FEM (Szabó and Babuška (1991)) in combination with an adaptive numerical integration method. p-FEM allows for the use of the same grid as used for the TO and its accuracy is improved by increasing the polynomial order of the FE-interpolation functions. This is an established modeling approach, and further details can be found in e.g. Babuška et al. (1981).

Several methods exist to perform the adaptive numerical integration, see Figure 4. Abedian et al. (2013) showed that quadtree refinement is a relatively efficient approach to obtain accurate results, so this method is used for the adaptive numerical integration. In 3D, an equivalent octree refinement scheme can be applied. This combination of p-FEM and adaptive integration was introduced by Parvizian et al. (2007) under the name Finite Cell Method (FCM). FCM captures both the geometry and the response of the structure more accurately than the initial low-order FE-model. In this way, the same grid used in Stage 1 can be retained while the geometry is described using an LSF. The obtained accuracy is verified in Section 5.3 for the performed case studies. For details on the FCM formulation, we refer to the original work by Parvizian et al. (2007).

Figure 4: Examples of adaptive numerical integration methods: ‘Ersatz material’ by e.g. Allaire et al. (2004), higher-order quadrature by e.g. Parvizian et al. (2007) and a quadtree refinement by e.g. Abedian et al. (2013). The ‘+’-signs indicate integration points.

The quadtree refinement is extended further from the boundary by approximately one element, see Figure 5. This allows the boundary to move during the shape optimization while remaining within the accurate quadtree integration band, and avoids frequent updating of the integration data structure during optimization. As the geometry is the result of an earlier TO process (Stage 1), boundary motion in this final shape optimization stage is typically small. Note also that no remeshing is required; only the integration cells are redefined. Elements at further distances from the structural domain are discarded.
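A minimal sketch of the quadtree subdivision of cut integration cells might look as follows (2D only; `phi` is any callable LSF, and the corner-sign cut test and cell representation are simplifying assumptions for illustration):

```python
def quadtree_cells(phi, cell, depth):
    """Recursively split a square cell that is cut by the zero level set
    into four children, down to `depth` levels (a sketch of the quadtree
    integration used by the FCM; phi maps (x, y) -> LSF value).

    cell = (x0, y0, size); returns the leaf cells used for integration."""
    x0, y0, s = cell
    corners = [phi(x0, y0), phi(x0 + s, y0), phi(x0, y0 + s), phi(x0 + s, y0 + s)]
    cut = min(corners) < 0.0 < max(corners)   # sign change -> boundary inside
    if depth == 0 or not cut:
        return [cell]                          # uncut or max depth: keep as-is
    h = s / 2.0
    children = [(x0, y0, h), (x0 + h, y0, h), (x0, y0 + h, h), (x0 + h, y0 + h, h)]
    out = []
    for c in children:
        out.extend(quadtree_cells(phi, c, depth - 1))
    return out

# A straight boundary at x = 0.3: only the cut cells get refined
cells = quadtree_cells(lambda x, y: x - 0.3, (0.0, 0.0, 1.0), 2)
```

Cells away from the boundary stay at full size, so the extra integration cost concentrates in a narrow band, which is exactly what makes the approach affordable in 3D (octree).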

Figure 5: FCM grid on the extracted geometry of the MBB beam of Figure 3 using one level of quadtree refinement. Red squares correspond to discarded elements and black squares to integration cells.

To analyze the design defined by the LSF using FCM, the densities (Equation 8) are evaluated in the integration points. The element stiffness matrix and subsequently the global stiffness matrix can then be assembled using the SIMP material interpolation in each integration point.

4.2 Sensitivity analysis

The sensitivity analysis for Stage 3 is very similar to that of Stage 1. The derivative of the objective (e.g. compliance $c$) with respect to the design variables (RBF weights $w_i$) is desired. The chain rule of differentiation gives:

$$\frac{\partial c}{\partial w_i} = \frac{\partial c}{\partial \rho_k}\,\frac{\partial \rho_k}{\partial \phi_k}\,\frac{\partial \phi_k}{\partial w_i}, \quad (10)$$

where a summation convention applies to index $k$, and $\rho_k$ and $\phi_k$ denote the density and LSF-value at a particular integration point $k$, respectively.

From Equations 3 and 4, it follows that the derivative of the LSF equals $\partial \phi_k / \partial w_i = \theta_i(\mathbf{x}_k)$. Note that this derivative is similar to the matrix $\mathbf{A}$ in Equation 6, except that the centroid location of an element is replaced by the location $\mathbf{x}_k$ of integration point $k$. As already noted in Section 3.1, the finite support of the RBFs renders this a sparse operator. From Equation 8, relating the LSF in each integration point to the density, the second derivative in Equation 10 becomes:

$$\frac{\partial \rho_k}{\partial \phi_k} = 2\beta\,(1-\rho_{\min})\,H(\phi_k)\big(1 - H(\phi_k)\big), \quad (11)$$

where the definition of the smooth Heaviside (Equation 7) has also been used and $\beta$ denotes its steepness parameter.

Finally, the partial derivative of the compliance with respect to the density in an integration point should be determined. This is the same sensitivity as computed in the SIMP formulation applied in Stage 1 (Bendsøe and Sigmund (2003)):

$$\frac{\partial c}{\partial \rho_k} = -p\,\rho_k^{\,p-1}\,\mathbf{u}_e^{T}\mathbf{K}_{e,k}\,\mathbf{u}_e, \quad (12)$$

where $\mathbf{u}_e$ is the displacement vector at element level, and $\mathbf{K}_{e,k}$ the contribution to the element stiffness matrix at integration point $k$. The derivative in Equation 10 can thus be split into parts: the first term is similar to that of Stage 1, while the second and third terms are problem-independent and solely related to the LSM. Since our focus is on the post-processing procedure itself, the proposed design optimization process has only been implemented and tested here for minimum compliance problems. However, because of this independence it applies in a similar fashion to other types of optimization problems.
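Combining the three chain-rule factors, the sensitivity with respect to the RBF weights reduces to a matrix-vector product with the (sparse, in practice) RBF evaluation matrix. A sketch, again assuming a logistic Heaviside for the Equation-11 factor (names and parameter values are illustrative):

```python
import numpy as np

def drho_dphi(phi, beta=6.0, rho_min=1e-3):
    # Equation-11-style factor for a logistic Heaviside sketch:
    # drho/dphi = 2*beta*(1 - rho_min)*H*(1 - H)
    H = 1.0 / (1.0 + np.exp(-2.0 * beta * phi))
    return 2.0 * beta * (1.0 - rho_min) * H * (1.0 - H)

def weight_sensitivities(dc_drho, phi, Theta, beta=6.0, rho_min=1e-3):
    """Chain rule of Equation 10 (sketch):
    dc/dw_i = sum_k dc/drho_k * drho_k/dphi_k * Theta[k, i],
    with Theta[k, i] = RBF i evaluated at integration point k."""
    return Theta.T @ (dc_drho * drho_dphi(phi, beta, rho_min))
```

Only the first factor (`dc_drho`) is problem-dependent; swapping the objective leaves the level-set part of the sensitivity pipeline untouched, mirroring the observation above.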

4.3 LSF slope control

In the sensitivity expression discussed above (Equation 10), the derivative of the density with respect to the LSF requires careful treatment. For high values of the steepness parameter in Equation 7, i.e. for a very steep Heaviside function, this derivative becomes zero in the entire domain, except at the structural boundary where it approaches infinity. This extreme spatial variation and localization of sensitivity information results in slow shape changes and convergence problems (Van Dijk et al. (2013)). To counteract this, the LSF and its spatial derivative should be controlled, to be able to use a fixed value for the steepness parameter. To avoid steep gradients of the LSF, different approaches exist in the literature. For conventional LSMs, regular signed-distance reinitialization is common, e.g. Allaire et al. (2004). To avoid the cost and inaccuracy associated with reinitialization, others have proposed adding an artificial regularization energy term to the objective (Zhu et al. (2015)). Setting the required strength of this term is, however, not trivial. In the context of parametric level-set approaches, as used in this paper, previous studies have used bounds on the RBF weights (Pingen et al. (2010)), but so far these have been chosen arbitrarily. As part of the proposed method, we here present a new mathematical relation between the maximum LSF slope and the bounds on the weights, to rigorously control the LSF steepness. Our approach is to determine the steepest LSF slope that can occur for a given bound on the RBF weights, and to use this insight to choose the steepness parameter such that sufficient sensitivity magnitudes are ensured near the structural boundary.

The maximum value that an LSF can take can be related to the maximum weight of the RBFs. A 1D RBF located at $x_i = 0$ and evaluated at $x$ is given by:

$$\theta(x) = e^{-x^2}, \quad (13)$$

and its derivative reads:

$$\theta'(x) = -2x\,e^{-x^2}. \quad (14)$$

An infinite summation of RBFs located at integer $x$-locations $x_i$, evaluated at $x = 0$, is numerically found to be:

$$\sum_{i=-\infty}^{\infty} \theta(x - x_i)\Big|_{x=0} = \vartheta_3\!\left(0, e^{-1}\right) \approx 1.77, \quad (15)$$

where $\vartheta_3$ is the elliptic theta function. The maximum LSF-value for a higher-dimensional LSF is simply:

$$\phi_{\max} = w_{\max}\left[\vartheta_3\!\left(0, e^{-1}\right)\right]^{d}, \quad (16)$$

as found by numerical validation, where $d$ is the dimension of the design (e.g. $d = 2$ for a 2D-case) and $w_{\max}$ is the bound on the RBF weights.
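The quoted theta-function value is easy to verify numerically: with unit weights and unit spacing, the Gaussian tails decay so fast that a handful of terms suffice:

```python
import math

# Sum of unit-weight Gaussians exp(-r^2) placed at all integer locations,
# evaluated at one of them: terms with |i| > 5 are negligible.
total = sum(math.exp(-i**2) for i in range(-5, 6))
print(round(total, 4))   # 1.7726, the elliptic theta value theta_3(0, 1/e)
```

The rapid decay is also what justifies the 7-element support truncation used when assembling the interpolation matrix in Stage 2.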

Considering an extreme weight distribution with $w_i = \pm w_{\max}$, the maximum possible slope occurs at the transition from the minimum LSF-value to the maximum one, as depicted in Figure 6. The summation of the derivatives of the RBFs is:

$$\phi'_{\max} = w_{\max}\sum_{i=-\infty}^{\infty}\left|\theta'(x - x_i)\right|_{x = 1/2}, \quad (17)$$

where $x = 1/2$, because this is where the maximum slope occurs (i.e. right in between the transition from negative to positive weights, as can be observed in Figure 6).

Figure 6: Infinite summation of 1D RBFs located at integer $x$-locations $x_i$. RBF weights are set to $\pm w_{\max}$, i.e. $-w_{\max}$ for RBFs located at $x_i \le 0$, and $+w_{\max}$ for those located at $x_i \ge 1$.

The maximum slope scales linearly with the maximum value of an RBF summation. Using this linear relationship between Equations 16 and 17 results in the following general expression for the maximum gradient magnitude of an LSF bounded by the weights $\pm w_{\max}$:

$$|\nabla\phi|_{\max} = w_{\max}\left[\vartheta_3\!\left(0, e^{-1}\right)\right]^{d-1}\sum_{i=-\infty}^{\infty}\left|\theta'\!\left(\tfrac{1}{2} - x_i\right)\right|. \quad (18)$$
Next, a suitable value for the steepness parameter $\beta$ of the Heaviside approximation (Equation 7) can be determined using the maximum possible value of Equation 18 and the average distance between integration points,

$$\Delta x = \frac{1}{2^{q}\,(p+1)}, \quad (19)$$

where $q$ is the number of quadtree levels and $p$ the p-FEM shape function order, which sets the number of Gauss points inside an integration cell. The maximum distance between the boundary and the nearest integration point is $\Delta x / 2$. The most critical situation occurs when the boundary ($\phi = 0$) falls exactly between two integration points. From this we can estimate the following upper bound on the corresponding absolute LSF-values in these integration points:

$$|\hat\phi| \le |\nabla\phi|_{\max}\,\frac{\Delta x}{2}. \quad (20)$$

In these points, the sensitivity should not drop below a certain minimum magnitude. Substitution of $\hat\phi$ in Equation 11 gives:

$$\frac{\partial\rho}{\partial\phi}\bigg|_{\hat\phi} = 2\beta\,(1-\rho_{\min})\,H(\hat\phi)\big(1 - H(\hat\phi)\big), \quad (21)$$

which can be numerically solved for $\beta$ when setting a chosen minimum value for the derivative in an integration point. This minimum value should not be close to zero, because then there would be no sensitivities defined around the boundary. A too large value would result in a boundary region containing an unnecessarily large amount of intermediate densities, as $\beta$ controls the width of the transition zone between the solid and void regions (Equation 7). Between these extremes, many values are suited to limit the gradient magnitude. In this research a value of 0.5 is chosen as the minimum value for the derivative of the density with respect to the LSF. This value, in combination with the weight bounds and the integration point spacing of the case studies, yields the fixed value of $\beta$ used throughout the shape optimization. Figure 7 illustrates the procedure to determine a fixed value for $\beta$.
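Determining the steepness parameter can be sketched as a one-dimensional search: take the largest steepness for which the density derivative at the integration-point bound still meets the chosen minimum of 0.5 (a logistic Heaviside is assumed; the bound value 0.2 and all names are illustrative):

```python
import math

def density_slope(beta, phi, rho_min=1e-3):
    # drho/dphi for a logistic Heaviside sketch (Equation-11 form):
    # 2*beta*(1 - rho_min)*H*(1 - H)
    H = 1.0 / (1.0 + math.exp(-2.0 * beta * phi))
    return 2.0 * beta * (1.0 - rho_min) * H * (1.0 - H)

def choose_beta(phi_hat, min_slope=0.5, beta_max=50.0, step=0.01):
    """Grid-search sketch of solving Equation 21: the steepest Heaviside
    (largest beta) whose density derivative at the LSF bound phi_hat
    still meets the minimum magnitude min_slope."""
    best, b = None, step
    while b <= beta_max:
        if density_slope(b, phi_hat) >= min_slope:
            best = b          # keep the largest feasible steepness
        b += step
    return best

beta = choose_beta(phi_hat=0.2)   # example integration-point bound
```

The slope first grows and then decays with increasing steepness, so the largest feasible value gives the crispest projection that still leaves usable sensitivities in the boundary band.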

Figure 7: Illustration of determining the steepness of the Heaviside function based on the maximum distance between the two integration points nearest to the boundary ($\Delta x$).

5 Case studies

Both 2D and 3D cases are presented in this section, with two examples each. One example is the MBB beam and the other a cantilever beam; see Figures 8a and 8b for the boundary conditions in 2D. The boundary conditions of the 3D case studies are shown in Figures 9a and 9b. The cases have been chosen such that complex topologies emerge, providing a good test of the proposed method. The compliance of the examples is minimized subject to a maximum volume fraction. For simplicity, the same optimization problem is used in all stages to demonstrate the method. The Method of Moving Asymptotes (MMA) proposed by Svanberg (1987) is used as the optimization algorithm.

(a) 2D MBB beam: vertical load is applied on the top left corner, left edge is fixed horizontally and bottom right is fixed vertically.
(b) 2D Cantilever beam: vertical load is applied on the middle of the right edge and the left edge is completely fixed.
Figure 8: The boundary conditions of the 2D case studies used to illustrate the proposed method.
(a) 3D MBB beam: vertical load is applied on the top middle node denoted by an arrow, spheres denote nodes fixed in x-direction and cubes denote nodes fixed in y- and z-direction.
(b) 3D Cantilever beam: two loads are applied as denoted by the arrows and four nodes are completely fixed denoted by the cones.
Figure 9: The boundary conditions of the 3D case studies used to illustrate the proposed method.

In all cases the TO of Stage 1 is performed using bi-/trilinear shape functions. The shape optimization of Stage 3 is done using 2nd-order p-FEM and one level of quadtree or octree refinement near the boundary. The number of Gauss points is 2×2 (2×2×2) per element for Stage 1 and 3×3 (3×3×3) in each integration cell for Stage 3 in 2D (3D). Other parameters used in the case studies are shown in Table 1. As will be shown in Section 5.3, this choice provides an attractive compromise between accuracy gain and computational cost.

Table 1: Parameter values used for the case studies.

The results from the case studies are obtained using a fixed number of 100 iterations for the topology optimization and two rounds of 10 iterations for the shape optimization. The quadtree or octree integration band is reinitialized after the first 10 iterations of the shape optimization to make sure the boundary of the geometry does not move outside this region. A possible improvement of the algorithm would be to apply this reinitialization only when the evolving boundary reaches the edge of the refined integration band. The convergence of the normalized compliance over the three design stages for the different case studies is shown in Figure 10. The graph shows a steady improvement of the objective in each stage, and that the chosen number of iterations is adequate. The transformation from the final density-based design to the initial level-set description (Stage 2) already improves the compliance, mainly because inefficient intermediate-density regions are removed while maintaining the maximum volume. Stage 3 subsequently adds further improvement.

Figure 10: Convergence of the normalized compliance over the three design stages for the different case studies. The performance of the initial design of Stage 1 is used as a reference.

Next, we first present the numerical results of our post-processing method, followed by an analysis of speed and accuracy.

5.1 2D-cases

The MBB beam is optimized on a 64×32 grid, allowing a volume fraction of 40%. The results after each of the three stages are shown in Figure 12a. The geometry extracted from the TO result is already smooth and shows almost no intermediate densities. Stage 2 creates a direct smooth representation of the jagged boundaries, hence the wavy pattern of the extracted geometry. These patterns then disappear in the final design due to the shape optimization. In all cases, the volume constraint is satisfied after both Stages 2 and 3. It can be observed that a clear solid-void geometry with smooth boundaries is obtained, even when starting from a fairly coarse TO result with considerable artifacts.
It can be verified that the sensitivities remain defined close to the boundary for the final design after the shape optimization: around the boundary there are always integration points with non-zero sensitivity values, see Figure 11.

Figure 11: Sensitivity values of the MBB beam after the shape optimization.

The cantilever beam is optimized on a finer 180×120 grid, allowing a volume fraction of 35%. This case illustrates the application of the procedure to a more refined initial design. In spite of the higher resolution, jagged boundaries are still present in the TO result, as well as a blurred central region. The results after the three stages are shown in Figure 12b. This case study demonstrates the capability of the proposed post-processing method to deal with intermediate densities: the blurred central spot is removed in the shape optimization stage and has disappeared from the final design obtained after Stage 3.

5.2 3D-cases

The MBB beam is also optimized in 3D, on a 64×10×32 grid, allowing a volume fraction of 10%. The 3D results are visualized by applying the Marching Cubes algorithm (Lorensen and Cline (1987)) to create a triangulated surface where the LSF is zero, which can be used directly to generate an STL input file for additive manufacturing, or processed further by other CAD tools. The MBB beams after the three stages are shown in Figure 13a. This case study shows a change in topology from Stage 2 to Stage 3: two small members at the rear have disappeared during the final optimization phase. This confirms that the level-set-based optimization in Stage 3 is not restricted to shape optimization, but can also handle topological changes.
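As a sketch of this export step, the ASCII STL writer below consumes the `(verts, faces)` arrays produced by any Marching Cubes implementation (e.g. `skimage.measure.marching_cubes`). The function name, file name and the single-triangle input in the demo are illustrative assumptions, not the paper's code.

```python
import numpy as np

def write_ascii_stl(verts, faces, path, name="design"):
    """Write a triangulated surface, e.g. the zero level set extracted by
    Marching Cubes, to an ASCII STL file readable by CAD and AM tools."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for tri in faces:
            v0, v1, v2 = verts[tri[0]], verts[tri[1]], verts[tri[2]]
            n = np.cross(v1 - v0, v2 - v0)        # facet normal from edge vectors
            n = n / (np.linalg.norm(n) + 1e-30)   # normalize, guard zero-area facets
            f.write(f"facet normal {n[0]:e} {n[1]:e} {n[2]:e}\n outer loop\n")
            for v in (v0, v1, v2):
                f.write(f"  vertex {v[0]:e} {v[1]:e} {v[2]:e}\n")
            f.write(" endloop\nendfacet\n")
        f.write(f"endsolid {name}\n")

# Toy input: one triangle (in practice, verts/faces come from Marching Cubes)
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
faces = np.array([[0, 1, 2]])
write_ascii_stl(verts, faces, "design.stl")
```

The resulting file can be opened by standard CAD or slicing software, providing the bridge to downstream tools mentioned above.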

The cantilever beam is also optimized in 3D, this time on a 30×30×30 grid, allowing a volume fraction of 5%. The loads are applied at two locations on the side plane, in both the vertical and the transverse direction, in order to increase the complexity of the design and thereby test the post-processing procedure. The results after the three stages are shown in Figure 13b. Also here, the proposed process generates a smooth, high-quality result from a relatively coarse 3D TO result, without any manual intervention.

(a) The MBB beam on a 64×32 grid.
(b) The cantilever beam on a 180×120 grid.
Figure 12: Results of the proposed three-stage structural design optimization process (topology optimization, geometry extraction and shape optimization) for the 2D case studies.
(a) The 3D version of the MBB beam on a 64×10×32 grid.
(b) The 3D version of the cantilever beam, with alternative load placement and directions, on a 30×30×30 grid.
Figure 13: Results of the proposed three-stage structural design optimization process (topology optimization, geometry extraction and shape optimization) for the 3D case studies.

5.3 Performance

In addition to the visual evaluation in the preceding sections, which confirms the smoothness and crispness of the generated designs, we evaluate the performance of the proposed post-processing method on two further aspects: speed and accuracy.

The computation time of the proposed post-processing method is difficult to quantify in a general sense, as it depends strongly on the chosen volume fraction, convergence criteria and desired accuracy. There are also possibilities for different trade-offs, in terms of resolution and iterations spent in Stage 1 (the TO phase) versus effort spent on the subsequent geometry extraction and shape optimization. No attempt has been made to optimize the total process in this regard.

For the considered example problems, computation times are listed in Table 2. With the exception of the 2D cantilever case, the post-processing (Stages 2 and 3 combined) takes more time than the TO phase. So, in the current implementation, the proposed post-processing method requires a higher computational effort than the (considerably coarser) TO stage. However, the employed Python code has not yet been optimized for speed, as readability was also considered important, which leaves room for efficiency improvements. Moreover, comparing the computation times of Stage 1 and Stages 2+3 is not a comparison on equal grounds: achieving similar quality (smoothness, discreteness, analysis accuracy) using TO alone would require extensive refinement and considerably higher computational effort.

Case       Stage 1   Stage 2   Stage 3   Stages 2+3 (share of total)
2D MBB          20         1        22   53%
2D Canti       371         6       167   32%
3D MBB       1,203        53     3,108   72%
3D Canti     1,454        80     2,369   63%
Table 2: Computation times (in seconds) for the case studies. The last column gives the time spent in Stages 2+3 as a share of the total.

To support this claim, the Stage 1 result of the MBB beam is transferred to increasingly finer grids, and an additional twenty iterations of density-based topology optimization are performed on each to improve the resolution. The resulting computation times, listed in Table 3, are much higher, especially compared to the proposed three-stage method. Figure 14 shows the improved resolutions obtained with the SIMP method. Even at a fourfold refinement level, at over 10 times the computational cost of the proposed post-processing method, the refined TO-only design still does not match the post-processed result shown in Figure 12a.

Grid size   Computation time (s)
64×32       3.5
128×64      24.6
256×128     262.3
Table 3: Computation times for an additional twenty iterations of density-based topology optimization on different grid resolutions for the 2D MBB beam case study. For reference: post-processing the base TO result (Stages 2+3) took 23 s.
Figure 14: Improved resolution of the TO result of the MBB beam, using an additional 20 iterations on a 64×32, a 128×64 and a 256×128 grid.

The accuracy of the final result mainly depends on the accuracy of the structural analysis (i.e. the p-FEM order and the number of quadtree levels). Figure 15a shows the normalized compliance for increasing p-FEM order of the final designs obtained at Stage 1 and Stage 3 of the 2D case studies. The geometries based on an LSF are evaluated using four quadtree levels. All integration cells, also for the TO result, contain 8×8 integration points to ensure that the integration error is not dominant. For the Stage 3 results, is used for the Heaviside function. The largest relative increase in analysis accuracy is obtained when going from 1st-order to 2nd-order p-FEM.

Note that p-FEM orders of 6 or higher would be needed to reach numerical convergence; the associated computational cost is not considered practical for the design stage. For this reason, we find that p = 2 provides a good compromise between cost and accuracy. Note also that all Stage 3 cases outperform the Stage 1 results, i.e. the post-processing improves the performance of the designs by 10-15%, as can also be observed in Figure 10.
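The diminishing returns of p-refinement can be illustrated on a deliberately minimal 1D analogue; this is not the paper's 2D FCM analysis. The sketch below computes the compliance of a clamped-free bar (EA = 1, load q(x) = x) with a monomial basis {x, …, x^p} chosen so the boundary condition is built in; the basis, load and function name are illustrative assumptions, and all integrals are evaluated in closed form.

```python
import numpy as np

def bar_compliance(p):
    """Compliance of a 1D bar (EA = 1, u(0) = 0, free end at x = 1) under
    the distributed load q(x) = x, using the monomial basis {x, ..., x^p}."""
    i = np.arange(1, p + 1, dtype=float)
    # K_ij = int_0^1 (i x^{i-1}) (j x^{j-1}) dx = i j / (i + j - 1)
    K = np.outer(i, i) / (i[:, None] + i[None, :] - 1.0)
    # f_i = int_0^1 x^i q(x) dx = 1 / (i + 2)
    f = 1.0 / (i + 2.0)
    a = np.linalg.solve(K, f)
    return float(f @ a)  # compliance C = int q u dx = f^T a

# Exact solution u = x/2 - x^3/6 gives C = 2/15; p = 3 reproduces it exactly.
for p in (1, 2, 3):
    print(p, bar_compliance(p))  # → 0.1111..., 0.1319..., 0.1333...
```

The compliance jumps most between p = 1 and p = 2 and then changes only marginally, mirroring the behaviour observed in Figure 15a.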
Figure 15b shows the normalized compliance for increasing levels of quadtree refinement of the final designs obtained at Stage 3 of the 2D case studies. Using more than one quadtree level, in combination with 2nd-order p-FEM and 3×3 Gauss integration points, does not significantly influence the compliance of the analysed designs, justifying the use of only one quadtree level for the case studies.
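To make the quadtree integration concrete, the sketch below recursively refines boundary-cut cells and applies a 3×3 Gauss rule with a solid/void indicator, in the spirit of the FCM integration used here. It is a simplified stand-in, not the paper's implementation: cut cells are detected only by a corner sign change, and the function names and the disc test case are illustrative.

```python
import numpy as np

# 3x3 Gauss-Legendre rule on [-1, 1]
GP, GW = np.polynomial.legendre.leggauss(3)

def cell_integral(f, x0, y0, h, phi, depth):
    """Integrate f over the part of the square cell [x0, x0+h] x [y0, y0+h]
    inside the solid domain {phi <= 0}, recursively subdividing cells cut by
    the zero level set for up to `depth` quadtree levels."""
    corners = [phi(x0 + a, y0 + b) for a in (0.0, h) for b in (0.0, h)]
    if min(corners) < 0.0 < max(corners) and depth > 0:
        h2 = h / 2.0
        return sum(cell_integral(f, x0 + i * h2, y0 + j * h2, h2, phi, depth - 1)
                   for i in (0, 1) for j in (0, 1))
    # Plain Gauss quadrature; points in the void (phi > 0) contribute nothing.
    val = 0.0
    for xi, wi in zip(GP, GW):
        for eta, wj in zip(GP, GW):
            x, y = x0 + (xi + 1.0) * h / 2.0, y0 + (eta + 1.0) * h / 2.0
            if phi(x, y) <= 0.0:
                val += wi * wj * f(x, y)
    return val * (h / 2.0) ** 2

def fcm_area(phi, n=8, levels=1, lo=-1.0, size=2.0):
    """Area of {phi <= 0} inside [lo, lo+size]^2 on an n x n base grid."""
    h = size / n
    return sum(cell_integral(lambda x, y: 1.0, lo + i * h, lo + j * h, h, phi, levels)
               for i in range(n) for j in range(n))

# Demo: area of a disc of radius 0.7; the error shrinks with quadtree depth.
disc = lambda x, y: np.hypot(x, y) - 0.7
exact = np.pi * 0.7 ** 2
for levels in (0, 1, 4):
    print(levels, abs(fcm_area(disc, levels=levels) - exact))
```

Swapping `f` for a strain-energy density instead of the constant 1 would give the compliance-style integrals discussed above.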

(a) Normalized compliance for different p-FEM orders after Stage 1 and Stage 3 for both 2D case studies.
(b) Normalized compliance for different levels of quadtree refinement of the design after Stage 3 for both 2D case studies.
Figure 15: Normalized compliance results for different FCM accuracy.

6 Conclusion

In this paper, we have presented a method for the automated post-processing of both 2D and 3D density-based TO results using a fully integrated level-set-based shape optimization, in order to efficiently improve the quality of TO results and to form a convenient bridge to other CAD tools. The aim was to obtain a structural design optimization process that results in optimized, smooth and crisp geometries without any manual labour. Based on the numerical examples, it can be concluded that this goal has been achieved. In contrast to existing approaches, the method extends naturally to 3D and produces optimized designs, instead of only performing a smoothing step for visual appearance. The chosen level-set-based formulation moreover allows for topological changes during the final optimization, which enables the removal of inefficient structural features.
Our method exploits p-refinement to avoid remeshing, and second order shape functions combined with a single quadtree/octree refinement proved sufficient to obtain accurate shape optimization results. This is linked to the way the maximum slope of the employed LSF is controlled, by bounding the weights of the involved RBFs. A rigorous procedure to set suitable weight bounds forms another contribution of this paper.

As future work, it will be of interest to test this approach on other TO problems involving different objectives and constraints. The presented approach and the associated sensitivity analysis can readily be extended to other types of optimization problems, e.g. with stress constraints as done in Cai et al. (2014) and Cai and Zhang (2015). Furthermore, choices can be made on whether to spend most computational effort in the initial TO stage or in the final shape optimization; how to optimally balance these two steps remains to be investigated. In the considered examples, the computational time required for the proposed post-processing steps on average exceeds that of the TO stage, but the current implementation leaves room for efficiency improvements. Compared to improving designs through manual post-processing or higher-resolution TO, the proposed fully automated procedure is certainly an attractive option. The developed Python code for the 2D case studies is made available on GitHub.


Acknowledgements

We gratefully acknowledge the contribution of J.C. Bakker, M. Barbera, N.D. Bos and S.J. van Elsloo to the Python script written for this research. We also thank Femto Engineering for their partnership in this research.


References

  • Abedian et al. (2013) A. Abedian, J. Parvizian, A. Düster, H. Khademyzadeh, and E. Rank. Performance of different integration schemes in facing discontinuities in the finite cell method. International Journal of Computational Methods, 10(03):1350002, 2013.
  • Allaire et al. (2004) G. Allaire, F. Jouve, and A.-M. Toader. Structural optimization using sensitivity analysis and a level-set method. Journal of Computational Physics, 194(1):363–393, 2004.
  • Babuška et al. (1981) I. Babuška, B. A. Szabó, and I. N. Katz. The p-version of the finite element method. SIAM Journal on Numerical Analysis, 18(3):515–545, 1981.
  • Bendsøe (1989) M. P. Bendsøe. Optimal shape design as a material distribution problem. Structural Optimization, 1(4):193–202, 1989.
  • Bendsøe and Sigmund (2003) M. P. Bendsøe and O. Sigmund. Topology Optimization - Theory, Methods and Applications. Springer, 2003.
  • Bremicker et al. (1991) M. Bremicker, M. Chirehdast, N. Kikuchi, and P. Papalambros. Integrated topology and shape optimization in structural design. Journal of Structural Mechanics, 19(4):551–587, 1991.
  • Cai and Zhang (2015) S. Cai and W. Zhang. Stress constrained topology optimization with free-form design domains. Computer Methods in Applied Mechanics and Engineering, 289:267–290, 2015.
  • Cai et al. (2014) S. Cai, W. Zhang, J. Zhu, and T. Gao. Stress constrained shape and topology optimization with fixed mesh: A B-spline finite cell method combined with level set function. Computer Methods in Applied Mechanics and Engineering, 278:361–387, 2014.
  • Chang and Tang (2001) K.-H. Chang and P.-S. Tang. Integration of design and manufacturing for structural shape optimization. Advances in Engineering Software, 32(7):555–567, 2001.
  • Costa Jr and Alves (2003) J. C. A. Costa Jr and M. K. Alves. Layout optimization with h-adaptivity of structures. International Journal for Numerical Methods in Engineering, 58(1):83–102, 2003.
  • Hsu and Hsu (2005) M.-H. Hsu and Y.-L. Hsu. Interpreting three-dimensional structural topology optimization results. Computers & Structures, 83(4-5):327–337, 2005.
  • Hsu et al. (2001) Y.-L. Hsu, M.-S. Hsu, and C.-T. Chen. Interpreting results from topology optimization using density contours. Computers & Structures, 79(10):1049–1058, 2001.
  • Kang and Wang (2012) Z. Kang and Y. Wang. A nodal variable method of structural topology optimization based on shepard interpolant. International Journal for Numerical Methods in Engineering, 90(3):329–342, 2012.
  • Kumar and Gossard (1996) A. Kumar and D. Gossard. Synthesis of optimal shape and topology of structures. Journal of Mechanical Design, 118(1):68–74, 1996.
  • Lin and Chao (2000) C.-Y. Lin and L.-S. Chao. Automated image interpretation for integrated topology and shape optimization. Structural and Multidisciplinary Optimization, 20(2):125–137, 2000.
  • Lorensen and Cline (1987) W. E. Lorensen and H. E. Cline. Marching cubes: A high resolution 3D surface construction algorithm. In ACM siggraph computer graphics, volume 21, pages 163–169. ACM, 1987.
  • Luo et al. (2008) Z. Luo, M. Y. Wang, S. Wang, and P. Wei. A level set-based parameterization method for structural shape and topology optimization. International Journal for Numerical Methods in Engineering, 76(1):1–26, 2008.
  • Luo et al. (2009) Z. Luo, L. Tong, and Z. Kang. A level set method for structural shape and topology optimization using radial basis functions. Computers & Structures, 87(7-8):425–434, 2009.
  • Maute and Ramm (1995) K. Maute and E. Ramm. Adaptive topology optimization. Structural Optimization, 10(2):100–112, 1995.
  • Nana et al. (2016) A. Nana, J.-C. Cuillière, and V. Francois. Towards adaptive topology optimization. Advances in Engineering Software, 100:290–307, 2016.
  • Nguyen et al. (2010) T. H. Nguyen, G. H. Paulino, J. Song, and C. H. Le. A computational paradigm for multiresolution topology optimization (MTOP). Structural and Multidisciplinary Optimization, 41(4):525–539, 2010.
  • Olhoff et al. (1991) N. Olhoff, M. P. Bendsøe, and J. Rasmussen. On CAD-integrated structural topology and design optimization. Computer Methods in Applied Mechanics and Engineering, 89(1-3):259–279, 1991.
  • Papalambros and Chirehdast (1990) P. Y. Papalambros and M. Chirehdast. An integrated environment for structural configuration design. Journal of Engineering Design, 1(1):73–96, 1990.
  • Parvizian et al. (2007) J. Parvizian, A. Düster, and E. Rank. Finite cell method. Computational Mechanics, 41(1):121–133, 2007.
  • Pingen et al. (2010) G. Pingen, M. Waidmann, A. Evgrafov, and K. Maute. A parametric level-set approach for topology optimization of flow domains. Structural and Multidisciplinary Optimization, 41(1):117–131, 2010.
  • Sigmund (2001) O. Sigmund. A 99 line topology optimization code written in Matlab. Structural and Multidisciplinary Optimization, 21(2):120–127, 2001.
  • Stainko (2006) R. Stainko. An adaptive multilevel approach to the minimal compliance problem in topology optimization. Communications in Numerical Methods in Engineering, 22(2):109–118, 2006.
  • Svanberg (1987) K. Svanberg. The method of moving asymptotes — a new method for structural optimization. International Journal for Numerical Methods in Engineering, 24(2):359–373, 1987.
  • Szabó and Babuška (1991) B. A. Szabó and I. Babuška. Finite element analysis. John Wiley & Sons, 1991.
  • Tang and Chang (2001) P.-S. Tang and K.-H. Chang. Integration of topology and shape optimization for design of structural components. Structural and Multidisciplinary Optimization, 22(1):65–82, 2001.
  • Van Dijk et al. (2013) N. P. Van Dijk, K. Maute, M. Langelaar, and F. Van Keulen. Level-set methods for structural topology optimization: a review. Structural and Multidisciplinary Optimization, 48(3):437–472, 2013.
  • Wang et al. (2013) Y. Wang, Z. Kang, and Q. He. An adaptive refinement approach for topology optimization based on separated density field description. Computers & Structures, 117:10–22, 2013.
  • Wang et al. (2014) Y. Wang, Z. Kang, and Q. He. Adaptive topology optimization with independent error control for separated displacement and density fields. Computers & Structures, 135:50–61, 2014.
  • Yi and Kim (2017) G. Yi and N. H. Kim. Identifying boundaries of topology optimization results using basic parametric features. Structural and Multidisciplinary Optimization, 55(5):1641–1654, 2017.
  • Zhou and Rozvany (1991) M. Zhou and G. Rozvany. The COC algorithm, Part II: topological, geometrical and generalized shape optimization. Computer Methods in Applied Mechanics and Engineering, 89(1-3):309–336, 1991.
  • Zhu et al. (2015) B. Zhu, X. Zhang, and S. Fatikow. Structural topology and shape optimization using a level set method with distance-suppression scheme. Computer Methods in Applied Mechanics and Engineering, 283:1214–1239, 2015.