
Neural reparameterization improves structural optimization

by   Stephan Hoyer, et al.

Structural optimization is a popular method for designing objects such as bridge trusses, airplane wings, and optical devices. Unfortunately, the quality of solutions depends heavily on how the problem is parameterized. In this paper, we propose using the implicit bias over functions induced by neural networks to improve the parameterization of structural optimization. Rather than directly optimizing densities on a grid, we instead optimize the parameters of a neural network which outputs those densities. This reparameterization leads to different and often better solutions. On a selection of 116 structural optimization tasks, our approach produces an optimal design 50% more often than the best baseline method.




1 Introduction

Figure 1: A multi-story building task. Figure (a) is a structure optimized in CNN weight space. Figures (b) and (c) are structures optimized in pixel space.

One of the driving forces behind the success of deep computer vision models is the so-called “deep image prior" of convolutional neural networks (CNNs). This phrase loosely describes a set of inductive biases, present even in untrained models, that make them effective for image processing. Researchers have taken advantage of this effect to perform inpainting, noise removal, and super-resolution on images with an untrained model Ulyanov et al. (2018).

There is growing evidence that this implicit prior extends to domains beyond natural images. Some examples include style transfer in fonts Azadi et al. (2018), uncertainty estimation in fluid dynamics Zhu et al. (2019), and data upsampling in medical imaging Dittmer et al. (2018). Indeed, whenever data contains translation invariance, spatial correlation, or multi-scale features, the deep image prior may be a useful tool.

One field where these characteristics are important – and where the deep image prior is under-explored – is computational science and engineering. Here, parameterization is extremely important: substituting one parameterization for another can dramatically change the solution. Consider, for example, the task of designing a multi-story building via structural optimization. The goal is to distribute a certain quantity of building material over a two-dimensional grid in order to maximize the resilience of the structure. As Figure 1 shows, different optimization methods (L-BFGS (Liu & Nocedal, 1989) vs. MMA (Svanberg, 1987)) and parameterizations (pixels vs. neural network) have big consequences for the final design.

How can we harness the deep image prior to better solve problems in computational science? In this paper, we propose reparameterizing optimization problems from the basis of a grid to the basis of a neural network. We use this approach to solve 116 structural optimization tasks and obtain solutions that are quantitatively and qualitatively better than the baselines.

2 Methods

Figure 2: Schema of our approach to reparameterizing a structural optimization problem with a neural network. Each of these steps – the CNN parameterization, the constraint step, and the physics simulation – is differentiable. We implement the forward pass as a TensorFlow graph and compute gradients via automatic differentiation.

While we apply our approach to structural optimization in this paper, we emphasize that it is generally applicable to a wide range of optimization problems in computational science. The core strategy is to write the physics model in an automatic differentiation package with support for neural networks, such as TensorFlow, PyTorch, or JAX. We emphasize that the differentiable physics model need not be written from scratch: adjoint models, as these are known in the physical sciences, are widely used Plessix (2006); Errico (1997); Giles & Pierce (2000), and software packages exist for computing them automatically Farrell et al. (2013).

The full computational graph begins with a neural network forward pass, proceeds to enforcing constraints and running the physics model, and ends with a scalar loss function (“compliance" in the context of structural optimization). Figure 2 gives an overview of this process. Once we have created this graph, we can recover the original optimization problem by performing gradient descent on the inputs to the constraint step ($\hat{x}$ in Figure 2). Then we can reparameterize the problem by optimizing the weights and inputs ($\theta$ and $\beta$) of a neural network which outputs $\hat{x}$.
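The switch between the two problems can be sketched in a few lines of NumPy. This is a toy illustration, not the paper's pipeline: a simple quadratic loss stands in for the physics simulation, a linear map `W @ beta` stands in for the CNN, and the hand-derived chain-rule gradients stand in for automatic differentiation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the differentiable "physics" loss over a density field x.
# The target t is arbitrary; real compliance would come from a simulation.
t = rng.normal(size=10)

def loss(x):
    return float(np.sum((x - t) ** 2))

def grad_x(x):
    return 2.0 * (x - t)

def optimize_pixels(steps=200, lr=0.1):
    """Original problem: gradient descent directly on the densities x."""
    x = np.zeros_like(t)
    for _ in range(steps):
        x -= lr * grad_x(x)
    return loss(x)

def optimize_reparameterized(steps=2000, lr=0.02):
    """Reparameterized problem: x = W @ beta, so we descend on the
    "network" weights W and the trainable input beta instead. The
    gradients below are just the chain rule, i.e. what automatic
    differentiation would compute for this graph."""
    W = 0.1 * rng.normal(size=(10, 4))
    beta = rng.normal(size=4)
    for _ in range(steps):
        g = grad_x(W @ beta)          # dL/dx at the network output
        dW = np.outer(g, beta)        # dL/dW
        dbeta = W.T @ g               # dL/dbeta
        W -= lr * dW
        beta -= lr * dbeta
    return loss(W @ beta)
```

Both routines minimize the same loss; only the search space differs, which is exactly the distinction between Pixel-LBFGS and CNN-LBFGS later in the paper.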

Structural optimization. We demonstrate our reparameterization approach on the domain of structural optimization. The goal of structural optimization is to use a physics simulation to design load-bearing structures, given constraints such as conservation of volume. We focus on the general case of free-form design without configuration constraints, known as topology optimization Bendsoe & Sigmund (2013).

Following the “modified SIMP" approach described by Andreassen et al. (2011), we begin with a discretized domain of linear finite elements on a regular square grid. The physical density $\tilde{x}_e$ at grid element (or pixel) $e$ is computed by applying a cone-filter with radius 2 on the input densities $x_e$. Then, letting $K(\tilde{x})$ be the global stiffness matrix, $U(K, F)$ the displacement vector, $F$ the vector of applied forces, and $V(\tilde{x})$ the total volume, we can write the optimization objective as:

$$\min_{x}\; c(x) = U^T K U \quad \text{such that} \quad K U = F, \quad V(\tilde{x}) = V_0, \quad 0 \le x_e \le 1 \;\; \forall e \qquad (1)$$

We implemented this algorithm in NumPy, SciPy, and Autograd Maclaurin et al. (2015). The computationally limiting step is the linear solve $K U = F$, for which we use a sparse Cholesky factorization Chen et al. (2008).
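The inner loop can be illustrated with a miniature dense example (the paper's implementation uses a sparse Cholesky factorization; the spring-chain stiffness matrix below is our own toy stand-in for a finite-element assembly):

```python
import numpy as np

def compliance(K, f):
    """Solve the linear system K u = f for the displacements, then
    evaluate the compliance c = f^T u (equal to u^T K u when K u = f)."""
    u = np.linalg.solve(K, f)
    return float(f @ u)

# Example: a chain of 4 unit springs, fixed at one end, unit load at the tip.
n = 4
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
K[-1, -1] = 1.0              # the last node has only one neighboring spring
f = np.zeros(n)
f[-1] = 1.0                  # unit force applied at the free end

# For unit springs in series, the tip displacement (and hence the
# compliance under this load) equals the number of springs, 4.
c = compliance(K, f)
```

Lower compliance means a stiffer structure, which is why Equation (1) minimizes it subject to the volume budget.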

One key challenge was enforcing the volume and density constraints of Equation (1). Standard topology optimization methods satisfy these constraints directly, but only when directly optimizing the design variables $x$. Our solution was to enforce the constraints in the forward pass, by mapping unconstrained logits $\hat{x}$ into valid densities $x$ with a constrained sigmoid transformation:

$$x_e = \frac{1}{1 + \exp\!\left(-\hat{x}_e - b(\hat{x}, V_0)\right)} \qquad (2)$$

where $b(\hat{x}, V_0)$ is solved for via binary search on the volume constraint. In the backwards pass, we differentiate through the transformation at the optimal point using implicit differentiation Griewank & Faure (2002).
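The forward pass of this constrained sigmoid is straightforward to sketch in NumPy. This is a minimal illustration of the binary search described above; the function name, the bisection bracket, and the tolerance are our own choices, and the implicit-differentiation backward pass is omitted:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def constrained_sigmoid(x_hat, volume_fraction, tol=1e-8):
    """Map unconstrained logits x_hat to densities in (0, 1) whose mean
    equals volume_fraction, by shifting all logits with a scalar bias b
    found via binary search. The mean density is monotonically
    increasing in b, so bisection converges."""
    lo, hi = -100.0, 100.0          # bracket for the shared bias b
    while hi - lo > tol:
        b = 0.5 * (lo + hi)
        if sigmoid(x_hat + b).mean() > volume_fraction:
            hi = b                  # too much material: shift logits down
        else:
            lo = b                  # too little material: shift logits up
    return sigmoid(x_hat + 0.5 * (lo + hi))
```

Because the sigmoid maps into (0, 1) and the bias enforces the mean, both the density bounds and the volume constraint of Equation (1) hold by construction at every optimization step.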

A note on baselines. Structural optimization problems are sensitive not only to choice of parameterization but also to choice of optimization algorithm. Unfortunately, standard topology optimization algorithms like the Method of Moving Asymptotes (MMA) Svanberg (1987) and the Optimality Criteria (OC) Bendsøe (1995) are ill-suited for training neural networks. How, then, can we separate the effect of parameterization from choice of optimizer? Our solution was to use a standard gradient-based optimizer, L-BFGS Nocedal (1980), to train both the neural network parameterization (CNN-LBFGS) and the pixel parameterization (Pixel-LBFGS). We found L-BFGS to be significantly more effective than stochastic gradient descent when optimizing a single design, similar to findings for style transfer Gatys et al. (2016).

Figure 3: Comparing baselines on the MBB beam example. Whereas Pixel-LBFGS and CNN-LBFGS use the same optimizer, we found that MMA and OC are much stronger baselines, so we decided to report all three.

Since constrained optimization is often much more effective at topology optimization (in pixel space, at least), we also report the MMA and OC results. In practice, we found that these provided stronger baselines than Pixel-LBFGS. Figure 3 is a good example: it shows structural optimization of an MBB beam using the three baselines. All methods except Pixel-LBFGS converge to similar, near-optimal solutions.

Choosing the 116 tasks. In designing the 116 structural optimization tasks, our goal was to create a distribution of diverse, well-studied problems with real-world significance. We started with a selection of problems from Valdez et al. (2017) and Sokół (2011). Most of these classic problems are simple beams with only a few forces, so we hand-designed additional tasks reflecting real-world designs, including bridges with various support restrictions, trees, ramps, walls, and buildings. The final tasks fall into 28 categories and span a wide range of grid sizes.

Neural network methods. Our convolutional neural network architecture was inspired by the U-net architecture used in the Deep Image Prior paper Ulyanov et al. (2018). We were only interested in the parameterization capabilities of this model, so we used only the second, upsampling half of the model. We also made the first activation vector ($\beta$ in Figure 2) a trainable parameter. Our model consisted of a dense layer into 32 image channels, followed by five repetitions of: tanh nonlinearity, 2x bilinear resize (for the middle three layers), global normalization by subtracting the mean and dividing by the standard deviation, a 2D convolution layer, and a learned bias over all elements/channels. The convolutional layers used 5x5 kernels and had 128, 64, 32, 16, and 1 channels respectively.
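Two of the less standard building blocks above, global normalization and 2x bilinear resize, are easy to sketch in NumPy. These are illustrative reimplementations, not the model's TensorFlow ops, and the align-corners resize convention is an assumption made for simplicity:

```python
import numpy as np

def global_normalize(x, eps=1e-6):
    """Global normalization: subtract the mean and divide by the standard
    deviation, both computed over the entire activation tensor."""
    return (x - x.mean()) / (x.std() + eps)

def bilinear_resize_2x(img):
    """2x bilinear upsampling of a 2D array, align-corners convention."""
    h, w = img.shape
    new_rows = np.linspace(0.0, h - 1.0, 2 * h)
    new_cols = np.linspace(0.0, w - 1.0, 2 * w)
    # Interpolate along columns first, then along rows.
    tmp = np.stack([np.interp(new_cols, np.arange(w), img[i]) for i in range(h)])
    out = np.stack(
        [np.interp(new_rows, np.arange(h), tmp[:, j]) for j in range(2 * w)],
        axis=1,
    )
    return out
```

In the model, the repeated resizes are what give the parameterization its multi-scale character: early layers control coarse structure, later layers refine details.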

3 Analysis

Figure 4: Empirical distribution of the relative error across design tasks. The x-axes measure design error relative to the best overall design. The y-axes measure the probability that the method’s solution has an error below the x-axis threshold.

We found that reparameterizing structural optimization problems with a neural network gave equal performance to MMA on small problems and compellingly better performance on large problems. On both small and large problems, it produced much better designs than OC and Pixel-LBFGS.

For each task, we report typical performance (median over 101 random seeds for the CNN; constant initialization for the other models, which was better than their median over random seeds) and “best-of-ensemble" performance (with the same initialization for all models, taken from the untrained CNN). Figure 4 summarizes our results; its second column of plots shows how, on large problems (defined by the number of grid points), the CNN-LBFGS solutions were more likely to have low error rates.
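The quantities plotted in Figure 4 can be stated precisely in a few lines. The compliance values below are made-up numbers for illustration; only the two formulas reflect the evaluation described above:

```python
import numpy as np

def relative_error(compliances, best_compliances):
    """Design error of one method across tasks, relative to the best
    overall design on each task (lower compliance is better)."""
    return (np.asarray(compliances) - np.asarray(best_compliances)) / np.asarray(best_compliances)

def fraction_below(errors, threshold):
    """The y-value in Figure 4: the fraction of tasks on which a
    method's relative error falls below a given x-axis threshold."""
    return float(np.mean(np.asarray(errors) < threshold))

# Hypothetical compliances of one method on three tasks, and the best
# compliance achieved by any method on each of those tasks.
errs = relative_error([105.0, 101.0, 130.0], [100.0, 100.0, 100.0])
```

Sweeping the threshold from 0 upward traces out the empirical cumulative distribution curves shown in the figure.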

Why do large problems benefit more? We were curious as to why large problems had more to gain from reparameterization. Returning to the literature, we found that finite grids suffer from a “mesh-dependency problem": solutions tend to vary as grid resolution changes Sigmund & Petersson (1998). When grid resolution is high, small-scale “spiderweb" structures tend to form first and then interfere with the development of large-scale structures. We suspected that optimizing the weights of a CNN allowed us to instead optimize structures at several spatial scales at once, thus improving the optimization dynamics. To investigate this idea, we plotted structures from all 116 design tasks (see ancillary files). Then we chose five examples to highlight important qualitative trends (Figure 5).

Figure 5: Qualitative examples of structural optimization via reparameterization. The scores below each structure measure relative difference between the design and the best overall design in that row. The “best of ensemble” CNN-parameterized solutions were best or near-best (score below 0.005) in 99 out of 116 tasks including these five, vs 66 out of 116 tasks for MMA. The CNN solutions are qualitatively different from the baselines and often involve simpler and more effective structures.

Reparameterized designs are often simpler. The CNN-LBFGS designs have fewer “spiderweb" artifacts as shown in the cantilever beam, MBB beam, and suspended bridge examples. On the cantilever beam, CNN-LBFGS used a total of eight supports whereas Pixel-MMA used eighteen. We see simpler structures as evidence that the CNN biased optimization towards large-scale structure. This effect was particularly pronounced for large problems, supporting our theory as to why they benefited more.

Convergence to different solutions. We also noted that the baseline structures resembled each other more closely than they resembled the CNN-LBFGS designs. In the thin support bridge example, the baseline designs feature double support columns, whereas CNN-LBFGS used a single support with treelike branching patterns. In the roof task, the baselines use branching patterns, while CNN-LBFGS uses pillars.

4 Related work

Parameterizing topology optimization. The most common parameterization for topology optimization is a grid mesh Andreassen et al. (2011); Sigmund (2001); Zhu et al. (2016). Sometimes, polyhedral meshes are used Gain et al. (2015). Some domain-specific structural optimizations feature locally-refined meshes and multiple load case adjustments Krog et al. (2004). Like locally refined meshes, our method permits structure optimization at multiple scales; unlike them, it optimizes all of these scales at once.

Neural networks and topology optimization. Several papers have proposed replacing topology optimization methods with CNNs Banga et al. (2018); Sosnovik & Oseledets (2019); Alter (2018); Jiang et al. (2019). Most of them begin by creating a dataset of structures via regular topology optimization and then training a model on the dataset. While doing so can reduce computation, it comes at the expense of relaxing physics and design constraints. More problematically, these models can only reproduce their training data. In contrast, our approach produces better designs that also obey exact physics constraints. One recent work resembles ours in that they use adjoint gradients to train a CNN model Jiang & Fan (2019). Their goal was to learn a joint, conditional model over a range of related tasks, which is different from our goal of reparameterizing a single structure.

5 Conclusions

Choice of parameterization has a powerful effect on solution quality for tasks such as structural optimization, where solutions must be computed by numerical optimization. Motivated by the observation that untrained deep image models have good inductive biases for many tasks, we reparameterized structural optimization tasks in terms of the output of a convolutional neural network (CNN). Optimization then involved training the parameters of this CNN for each task. The resulting framework produced qualitatively and quantitatively better designs on a set of 116 tasks.


  • Alter (2018) Alter, A. Structural topology optimization using a convolutional neural network. Preprint, 2018.
  • Andreassen et al. (2011) Andreassen, E., Clausen, A., Schevenels, M., Lazarov, B. S., and Sigmund, O. Efficient topology optimization in MATLAB using 88 lines of code. Structural and Multidisciplinary Optimization, 43(1):1–16, 2011.
  • Azadi et al. (2018) Azadi, S., Fisher, M., Kim, V. G., Wang, Z., Shechtman, E., and Darrell, T. Multi-content GAN for few-shot font style transfer. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
  • Banga et al. (2018) Banga, S., Gehani, H., Bhilare, S., Patel, S., and Kara, L. 3D topology optimization using convolutional neural networks. arXiv preprint arXiv:1808.07440, 2018.
  • Bendsøe (1995) Bendsøe, M. P. Optimization of Structural Topology, Shape, and Material. Springer, Berlin, Heidelberg, 1995.
  • Bendsoe & Sigmund (2013) Bendsoe, M. P. and Sigmund, O. Topology Optimization: Theory, Methods, and Applications. Springer Science & Business Media, April 2013.
  • Chen et al. (2008) Chen, Y., Davis, T. A., Hager, W. W., and Rajamanickam, S. Algorithm 887: CHOLMOD, supernodal sparse Cholesky factorization and update/downdate. ACM Trans. Math. Softw., 35(3):22:1–22:14, October 2008. ISSN 0098-3500. doi: 10.1145/1391989.1391995.
  • Dittmer et al. (2018) Dittmer, S., Kluth, T., Maass, P., and Baguer, D. O. Regularization by architecture: A deep prior approach for inverse problems. arXiv preprint arXiv:1812.03889, 2018.
  • Errico (1997) Errico, R. M. What is an adjoint model? Bull. Am. Meteorol. Soc., 78:2539, 1997.
  • Farrell et al. (2013) Farrell, P. E., Ham, D. A., Funke, S. W., and Rognes, M. E. Automated derivation of the adjoint of High-Level transient finite element programs. SIAM Journal on Scientific Computing, 35(4):C369–C393, 2013.
  • Gain et al. (2015) Gain, A. L., Paulino, G. H., Duarte, L. S., and Menezes, I. F. Topology optimization using polytopes. Computer Methods in Applied Mechanics and Engineering, 293:411–430, 2015.
  • Gatys et al. (2016) Gatys, L. A., Ecker, A. S., and Bethge, M. Image style transfer using convolutional neural networks. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • Giles & Pierce (2000) Giles, M. B. and Pierce, N. A. An introduction to the adjoint approach to design. Flow, Turbulence and Combustion, 65(3/4):393–415, 2000.
  • Griewank & Faure (2002) Griewank, A. and Faure, C. Reduced functions, gradients and hessians from fixed-point iterations for state equations. Numerical Algorithms, 30(2):113–139, 2002.
  • Jiang & Fan (2019) Jiang, J. and Fan, J. A. Global optimization of dielectric metasurfaces using a Physics-Driven neural network. Nano Lett., 19(8):5366–5372, August 2019.
  • Jiang et al. (2019) Jiang, J., Sell, D., Hoyer, S., Hickey, J., Yang, J., and Fan, J. A. Free-form diffractive metagrating design based on generative adversarial networks. ACS Nano, 13(8):8872–8878, 2019. doi: 10.1021/acsnano.9b02371. PMID: 31314492.
  • Krog et al. (2004) Krog, L., Tucker, A., Kemp, M., and Boyd, R. Topology optimisation of aircraft wing box ribs. In 10th AIAA/ISSMO multidisciplinary analysis and optimization conference, pp. 4481, 2004.
  • Liu & Nocedal (1989) Liu, D. C. and Nocedal, J. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1-3):503–528, 1989.
  • Maclaurin et al. (2015) Maclaurin, D., Duvenaud, D., and Adams, R. P. Autograd: Effortless gradients in NumPy. In ICML 2015 AutoML Workshop, 2015.
  • Nocedal (1980) Nocedal, J. Updating quasi-Newton matrices with limited storage. Math. Comput., 35(151):773–782, 1980.
  • Plessix (2006) Plessix, R.-E. A review of the adjoint-state method for computing the gradient of a functional with geophysical applications. Geophys. J. Int., 167(2):495–503, 2006.
  • Sigmund (2001) Sigmund, O. A 99 line topology optimization code written in MATLAB. Structural and Multidisciplinary Optimization, 21(2):120–127, 2001.
  • Sigmund & Petersson (1998) Sigmund, O. and Petersson, J. Numerical instabilities in topology optimization: A survey on procedures dealing with checkerboards, mesh-dependencies and local minima. Structural optimization, 16:68–75, 1998.
  • Sokół (2011) Sokół, T. A 99 line code for discretized Michell truss optimization written in Mathematica. Structural and Multidisciplinary Optimization, 43(2):181–190, 2011.
  • Sosnovik & Oseledets (2019) Sosnovik, I. and Oseledets, I. Neural networks for topology optimization. Russian Journal of Numerical Analysis and Mathematical Modelling, 34(4):215–223, 2019.
  • Svanberg (1987) Svanberg, K. The method of moving asymptotes—a new method for structural optimization. International Journal for Numerical Methods in Engineering, 24(2):359–373, 1987.
  • Ulyanov et al. (2018) Ulyanov, D., Vedaldi, A., and Lempitsky, V. Deep image prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9446–9454, 2018.
  • Valdez et al. (2017) Valdez, S. I., Botello, S., Ochoa, M. A., Marroquín, J. L., and Cardoso, V. Topology optimization benchmarks in 2d: Results for minimum compliance and minimum volume in planar stress problems. Arch. Comput. Methods Eng., 24(4):803–839, November 2017.
  • Zhu et al. (2016) Zhu, J.-H., Zhang, W.-H., and Xia, L. Topology optimization in aircraft and aerospace structures design. Archives of Computational Methods in Engineering, 23(4):595–622, 2016.
  • Zhu et al. (2019) Zhu, Y., Zabaras, N., Koutsourelakis, P.-S., and Perdikaris, P. Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data. Journal of Computational Physics, 394:56–81, 2019.