1 Introduction
One of the driving forces behind the success of deep computer vision models is the so-called "deep image prior" of convolutional neural networks (CNNs). This phrase loosely describes a set of inductive biases, present even in untrained models, that make them effective for image processing. Researchers have taken advantage of this effect to perform inpainting, noise removal, and super-resolution on images with an untrained model (Ulyanov et al., 2018).

There is growing evidence that this implicit prior extends to domains beyond natural images. Some examples include style transfer in fonts (Azadi et al., 2018), uncertainty estimation in fluid dynamics (Zhu et al., 2019), and data upsampling in medical imaging (Dittmer et al., 2018). Indeed, whenever data contains translation invariance, spatial correlation, or multi-scale features, the deep image prior may be a useful tool.

One field where these characteristics are important, and where the deep image prior is underexplored, is computational science and engineering. Here, parameterization matters enormously: substituting one parameterization for another can dramatically change the result. Consider, for example, the task of designing a multistory building via structural optimization. The goal is to distribute a certain quantity of building material over a two-dimensional grid in order to maximize the resilience of the structure. As Figure 1 shows, the choice of optimization method (L-BFGS (Liu & Nocedal, 1989) vs. MMA (Svanberg, 1987)) and of parameterization (pixels vs. neural network) has large consequences for the final design.
How can we harness the deep image prior to better solve problems in computational science? In this paper, we propose reparameterizing optimization problems from the basis of a grid to the basis of a neural network. We use this approach to solve 116 structural optimization tasks and obtain solutions that are quantitatively and qualitatively better than the baselines.
2 Methods
While we apply our approach to structural optimization in this paper, we emphasize that it is applicable to a wide range of optimization problems in computational science. The core strategy is to write the physics model in an automatic differentiation package with support for neural networks, such as TensorFlow, PyTorch, or JAX. We emphasize that the differentiable physics model need not be written from scratch: adjoint models, as these are known in the physical sciences, are widely used (Plessix, 2006; Errico, 1997; Giles & Pierce, 2000), and software packages exist for computing them automatically (Farrell et al., 2013).

The full computational graph begins with a neural network forward pass, proceeds to enforcing constraints and running the physics model, and ends with a scalar loss function ("compliance" in the context of structural optimization). Figure 2 gives an overview of this process. Once we have created this graph, we can recover the original optimization problem by performing gradient descent directly on the inputs to the constraint step ($\hat{x}$ in Figure 2). Alternatively, we can reparameterize the problem by optimizing the weights and inputs ($\theta$ and $\beta$) of a neural network which outputs $\hat{x}$.

Structural optimization. We demonstrate our reparameterization approach on the domain of structural optimization. The goal of structural optimization is to use a physics simulation to design load-bearing structures, given constraints such as conservation of volume. We focus on the general case of free-form design without configuration constraints, known as topology optimization (Bendsoe & Sigmund, 2013).
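The contrast between optimizing the design directly and optimizing a parameterization of it can be sketched in a few lines. The snippet below is a toy illustration, not the paper's code: a quadratic function stands in for the physics loss, a linear map stands in for the CNN, and gradients are written by hand rather than taken from an autodiff framework.

```python
import numpy as np

def loss(x, K):
    """Quadratic stand-in for the physics objective."""
    return x @ K @ x

def grad_loss(x, K):
    return 2.0 * K @ x

rng = np.random.default_rng(0)
n, m = 8, 4
A = rng.normal(size=(n, n))
K = A @ A.T + n * np.eye(n)              # symmetric positive definite

# (a) Pixel parameterization: gradient descent directly on the design x-hat.
x = rng.normal(size=n)
for _ in range(5000):
    x -= 1e-4 * grad_loss(x, K)

# (b) Reparameterization: descend on the "network" parameters (theta, beta),
# whose output theta @ beta plays the role of x-hat.
theta = rng.normal(size=(n, m))
beta = rng.normal(size=m)
for _ in range(5000):
    g = grad_loss(theta @ beta, K)       # dL/dx-hat, via the chain rule below
    theta, beta = theta - 1e-4 * np.outer(g, beta), beta - 1e-4 * theta.T @ g

print(loss(x, K), loss(theta @ beta, K))
```

Both routes minimize the same loss; they differ only in which variables the gradient steps update, which is exactly the substitution the computational graph in Figure 2 makes.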
Following the "modified SIMP" approach described by Andreassen et al. (2011), we begin with a discretized domain of linear finite elements on a regular square grid. The physical density $\tilde{x}_e$ at grid element (or pixel) $e$ is computed by applying a cone filter with radius 2 to the input densities $x$. Then, letting $K$ be the global stiffness matrix, $U$ the displacement vector, $F$ the vector of applied forces, and $V$ the total volume, we can write the optimization objective as:

$$\min_{\hat{x}} \; c(x) = U^T K U \quad \text{such that:} \quad K U = F, \quad \sum_e x_e = V, \quad 0 \le x_e \le 1 \qquad (1)$$
We implemented this algorithm in NumPy, SciPy, and Autograd (Maclaurin et al., 2015). The computationally limiting step is the sparse linear solve $KU = F$, for which we use a sparse Cholesky factorization (Chen et al., 2008).
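As a sketch of this limiting step, the snippet below factors a small sparse symmetric positive-definite matrix once and reuses the factorization for the solve. SciPy's sparse LU (via `factorized`) stands in for the CHOLMOD Cholesky solver, and a 1D Poisson matrix stands in for the real assembled stiffness matrix.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy stand-in for the stiffness matrix K: a 1D Poisson (tridiagonal) matrix,
# which is sparse and symmetric positive definite like the real FEM assembly.
n = 100
main = 2.0 * np.ones(n)
off = -np.ones(n - 1)
K = sp.diags([off, main, off], [-1, 0, 1], format="csc")

F = np.zeros(n)
F[n // 2] = 1.0                  # a single point load

solve = spla.factorized(K)       # factor once...
u = solve(F)                     # ...then solve cheaply for any load vector

print(np.allclose(K @ u, F))     # residual check
```

Factoring once and reusing the factorization is the reason a direct sparse solver pays off here: every optimization step needs a solve against the same sparsity pattern.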
One key challenge was enforcing the volume and density constraints of Equation (1). Standard topology optimization methods satisfy these constraints directly, but only when directly optimizing the design variables $x$. Our solution was to enforce the constraints in the forward pass, by mapping the unconstrained logits $\hat{x}$ into valid densities $x$ with a constrained sigmoid transformation:

$$x_e = \frac{1}{1 + \exp[-(\hat{x}_e + b)]} \qquad (2)$$

where $b$ is solved for via binary search on the volume constraint. In the backwards pass, we differentiate through the transformation at the optimal point using implicit differentiation (Griewank & Faure, 2002).
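A minimal sketch of this forward pass follows, treating the volume constraint as a target mean density and omitting the implicit-differentiation backward pass; the bisection bracket for the offset $b$ is an assumption, not a value from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def constrained_sigmoid(x_hat, volume, tol=1e-8):
    """Map logits to densities in (0, 1) whose mean hits the target volume
    fraction, by bisecting on a shared offset b (bracket is an assumption)."""
    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        b = 0.5 * (lo + hi)
        if sigmoid(x_hat + b).mean() > volume:
            hi = b               # too much material: decrease the offset
        else:
            lo = b
    return sigmoid(x_hat + 0.5 * (lo + hi))

rng = np.random.default_rng(0)
x = constrained_sigmoid(rng.normal(size=1000), volume=0.4)
print(x.mean())                  # ~0.4 by construction
```

Bisection works here because the mean density is monotonically increasing in the shared offset, so the volume constraint has a unique root in $b$.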
A note on baselines. Structural optimization problems are sensitive not only to the choice of parameterization but also to the choice of optimization algorithm. Unfortunately, standard topology optimization algorithms like the Method of Moving Asymptotes (MMA) (Svanberg, 1987) and the Optimality Criteria (OC) (Bendsøe, 1995) are ill-suited for training neural networks. How, then, can we separate the effect of parameterization from the choice of optimizer? Our solution was to use a standard gradient-based optimizer, L-BFGS (Nocedal, 1980), to train both the neural network parameterization (CNN-LBFGS) and the pixel parameterization (Pixel-LBFGS). We found L-BFGS to be significantly more effective than stochastic gradient descent when optimizing a single design, similar to findings for style transfer (Gatys et al., 2016).

Since constrained optimization is often much more effective at topology optimization (in pixel space, at least), we also report the MMA and OC results. In practice, we found that these provided stronger baselines than Pixel-LBFGS. Figure 3 is a good example: it shows structural optimization of an MBB beam using the three baselines. All methods except Pixel-LBFGS converge to similar, near-optimal solutions.
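For illustration, the snippet below shows the kind of optimizer setup this implies, using SciPy's L-BFGS-B on a toy quadratic that stands in for the compliance objective; the actual experiments instead differentiate the full physics model to supply the gradient.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the compliance objective: a convex quadratic with a
# known minimizer, so convergence of the optimizer is easy to check.
rng = np.random.default_rng(0)
n = 50
A = rng.normal(size=(n, n))
K = A @ A.T + n * np.eye(n)      # symmetric positive definite
target = rng.normal(size=n)

def loss_and_grad(x):
    """Return the objective value and its gradient together."""
    r = x - target
    return 0.5 * r @ K @ r, K @ r

res = minimize(loss_and_grad, np.zeros(n), jac=True, method="L-BFGS-B",
               options={"maxiter": 500, "gtol": 1e-10})
print(res.success, float(res.fun))
```

Supplying the analytic gradient via `jac=True` is the key design choice: it is what the differentiable physics model makes possible, and L-BFGS depends on accurate gradients to build its curvature estimate.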
Choosing the 116 tasks. In designing the 116 structural optimization tasks, our goal was to create a distribution of diverse, well-studied problems with real-world significance. We started with a selection of problems from Valdez et al. (2017) and Sokół (2011). Most of these classic problems are simple beams with only a few forces, so we hand-designed additional tasks reflecting real-world designs, including bridges with various support restrictions, trees, ramps, walls, and buildings. The final tasks fall into 28 categories and span a wide range of grid sizes.
Neural network methods. Our convolutional neural network architecture was inspired by the U-Net architecture used in the Deep Image Prior paper (Ulyanov et al., 2018). We were only interested in the parameterization capabilities of this model, so we used only the second, upsampling half of the model. We also made the first activation vector ($\beta$ in Figure 2) a trainable parameter. Our model consisted of a dense layer mapping $\beta$ into image channels, followed by five repetitions of: a tanh nonlinearity, a 2x bilinear resize (for the middle three layers), global normalization (subtracting the mean and dividing by the standard deviation), a 2D convolution layer, and a learned bias over all elements/channels. The convolutional layers used 5x5 kernels, ending in a single output channel.

3 Analysis
We found that reparameterizing structural optimization problems with a neural network gave equal performance to MMA on small problems and compellingly better performance on large problems. On both small and large problems, it produced much better designs than OC and PixelLBFGS.
For each task, we report typical performance (median over 101 random seeds for the CNN; constant initialization for the other models, which was better than the median for all baselines) and "best-of-ensemble" performance (with the same initializations for all models, taken from the untrained CNN). Figure 4 summarizes our results; its second column of plots shows how, on large problems (defined by grid-point count), the CNN-LBFGS solutions were more likely to have low error.
Why do large problems benefit more? We were curious why large problems had more to gain from reparameterization. Returning to the literature, we found that finite grids suffer from a "mesh-dependency problem": solutions tend to vary as grid resolution changes (Sigmund & Petersson, 1998). When grid resolution is high, small-scale "spiderweb" structures tend to form first and then interfere with the development of large-scale structures. We suspected that optimizing the weights of a CNN allowed us to instead optimize structures at several spatial scales at once, thus improving the optimization dynamics. To investigate this idea, we plotted structures from all 116 design tasks (see ancillary files). Then we chose five examples to highlight important qualitative trends (Figure 5).
Reparameterized designs are often simpler. The CNN-LBFGS designs have fewer "spiderweb" artifacts, as shown in the cantilever beam, MBB beam, and suspended bridge examples. On the cantilever beam, CNN-LBFGS used a total of eight supports whereas Pixel-MMA used eighteen. We see simpler structures as evidence that the CNN biased optimization towards large-scale structure. This effect was particularly pronounced on large problems, supporting our theory of why they benefited more.
Convergence to different solutions. We also noted that the baseline structures resembled each other more closely than they resembled the CNN-LBFGS designs. In the thin-support bridge example, the baseline designs feature double support columns, whereas CNN-LBFGS used a single support with tree-like branching patterns. In the roof task, the baselines use branching patterns while CNN-LBFGS uses pillars.
4 Related work
Parameterizing topology optimization. The most common parameterization for topology optimization is a grid mesh (Andreassen et al., 2011; Sigmund, 2001; Zhu et al., 2016). Sometimes polyhedral meshes are used (Gain et al., 2015). Some domain-specific structural optimizations feature locally-refined meshes and multiple load case adjustments (Krog et al., 2004). Like locally-refined meshes, our method permits structure optimization at multiple scales. Unlike them, it permits optimization at all of these scales at once.
Neural networks and topology optimization. Several papers have proposed replacing topology optimization methods with CNNs (Banga et al., 2018; Sosnovik & Oseledets, 2019; Alter, 2018; Jiang et al., 2019). Most of them begin by creating a dataset of structures via regular topology optimization and then training a model on that dataset. While doing so can reduce computation, it comes at the expense of relaxing physics and design constraints. More problematically, these models can only reproduce designs like those in their training data. In contrast, our approach produces better designs while obeying exact physics constraints. One recent work resembles ours in that it uses adjoint gradients to train a CNN model (Jiang & Fan, 2019). Their goal, however, was to learn a joint, conditional model over a range of related tasks, which differs from our goal of reparameterizing a single structure.
5 Conclusions
Choice of parameterization has a powerful effect on solution quality for tasks such as structural optimization, where solutions must be computed by numerical optimization. Motivated by the observation that untrained deep image models have good inductive biases for many tasks, we reparameterized structural optimization tasks in terms of the output of a convolutional neural network (CNN). Optimization then involved training the parameters of this CNN for each task. The resulting framework produced qualitatively and quantitatively better designs on a set of 116 tasks.
References
 Alter (2018) Alter, A. Structural topology optimization using a convolutional neural network. preprint, 2018. URL http://cs230.stanford.edu/files_winter_2018/projects/6907833.pdf.
 Andreassen et al. (2011) Andreassen, E., Clausen, A., Schevenels, M., Lazarov, B. S., and Sigmund, O. Efficient topology optimization in MATLAB using 88 lines of code. Structural and Multidisciplinary Optimization, 43(1):1–16, 2011.

 Azadi et al. (2018) Azadi, S., Fisher, M., Kim, V. G., Wang, Z., Shechtman, E., and Darrell, T. Multi-content GAN for few-shot font style transfer. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
 Banga et al. (2018) Banga, S., Gehani, H., Bhilare, S., Patel, S., and Kara, L. 3D topology optimization using convolutional neural networks. arXiv preprint arXiv:1808.07440, 2018.
 Bendsøe (1995) Bendsøe, M. P. Optimization of Structural Topology, Shape, and Material. Springer, Berlin, Heidelberg, 1995.
 Bendsoe & Sigmund (2013) Bendsoe, M. P. and Sigmund, O. Topology Optimization: Theory, Methods, and Applications. Springer Science & Business Media, April 2013.
 Chen et al. (2008) Chen, Y., Davis, T. A., Hager, W. W., and Rajamanickam, S. Algorithm 887: CHOLMOD, supernodal sparse Cholesky factorization and update/downdate. ACM Trans. Math. Softw., 35(3):22:1–22:14, October 2008. ISSN 0098-3500. doi: 10.1145/1391989.1391995. URL http://doi.acm.org/10.1145/1391989.1391995.
 Dittmer et al. (2018) Dittmer, S., Kluth, T., Maass, P., and Baguer, D. O. Regularization by architecture: A deep prior approach for inverse problems. arXiv preprint arXiv:1812.03889, 2018.
 Errico (1997) Errico, R. M. What is an adjoint model? Bull. Am. Meteorol. Soc., 78:2539, 1997.
 Farrell et al. (2013) Farrell, P. E., Ham, D. A., Funke, S. W., and Rognes, M. E. Automated derivation of the adjoint of HighLevel transient finite element programs. SIAM Journal on Scientific Computing, 35(4):C369–C393, 2013.
 Gain et al. (2015) Gain, A. L., Paulino, G. H., Duarte, L. S., and Menezes, I. F. Topology optimization using polytopes. Computer Methods in Applied Mechanics and Engineering, 293:411–430, 2015.
 Gatys et al. (2016) Gatys, L. A., Ecker, A. S., and Bethge, M. Image style transfer using convolutional neural networks. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
 Giles & Pierce (2000) Giles, M. B. and Pierce, N. A. An introduction to the adjoint approach to design. Flow, Turbulence and Combustion, 65(3/4):393–415, 2000.
 Griewank & Faure (2002) Griewank, A. and Faure, C. Reduced functions, gradients and hessians from fixedpoint iterations for state equations. Numerical Algorithms, 30(2):113–139, 2002.
 Jiang & Fan (2019) Jiang, J. and Fan, J. A. Global optimization of dielectric metasurfaces using a PhysicsDriven neural network. Nano Lett., 19(8):5366–5372, August 2019.
 Jiang et al. (2019) Jiang, J., Sell, D., Hoyer, S., Hickey, J., Yang, J., and Fan, J. A. Freeform diffractive metagrating design based on generative adversarial networks. ACS Nano, 13(8):8872–8878, 2019. doi: 10.1021/acsnano.9b02371. URL https://doi.org/10.1021/acsnano.9b02371. PMID: 31314492.
 Krog et al. (2004) Krog, L., Tucker, A., Kemp, M., and Boyd, R. Topology optimisation of aircraft wing box ribs. In 10th AIAA/ISSMO multidisciplinary analysis and optimization conference, pp. 4481, 2004.
 Liu & Nocedal (1989) Liu, D. C. and Nocedal, J. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1–3):503–528, 1989.
 Maclaurin et al. (2015) Maclaurin, D., Duvenaud, D., and Adams, R. P. Autograd: Effortless gradients in numpy. In ICML 2015 AutoML Workshop, 2015. URL https://github.com/HIPS/autograd.
 Nocedal (1980) Nocedal, J. Updating quasi-Newton matrices with limited storage. Math. Comput., 35(151):773–782, 1980.
 Plessix (2006) Plessix, R.-E. A review of the adjoint-state method for computing the gradient of a functional with geophysical applications. Geophys. J. Int., 167(2):495–503, 2006.
 Sigmund (2001) Sigmund, O. A 99 line topology optimization code written in MATLAB. Structural and Multidisciplinary Optimization, 21(2):120–127, 2001.
 Sigmund & Petersson (1998) Sigmund, O. and Petersson, J. Numerical instabilities in topology optimization: A survey on procedures dealing with checkerboards, meshdependencies and local minima. Structural optimization, 16:68–75, 1998.
 Sokół (2011) Sokół, T. A 99 line code for discretized Michell truss optimization written in Mathematica. Structural and Multidisciplinary Optimization, 43(2):181–190, 2011.
 Sosnovik & Oseledets (2019) Sosnovik, I. and Oseledets, I. Neural networks for topology optimization. Russian Journal of Numerical Analysis and Mathematical Modelling, 34(4):215–223, 2019.
 Svanberg (1987) Svanberg, K. The method of moving asymptotes—a new method for structural optimization. International Journal for Numerical Methods in Engineering, 24(2):359–373, 1987.
 Ulyanov et al. (2018) Ulyanov, D., Vedaldi, A., and Lempitsky, V. Deep image prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9446–9454, 2018.
 Valdez et al. (2017) Valdez, S. I., Botello, S., Ochoa, M. A., Marroquín, J. L., and Cardoso, V. Topology optimization benchmarks in 2d: Results for minimum compliance and minimum volume in planar stress problems. Arch. Comput. Methods Eng., 24(4):803–839, November 2017.
 Zhu et al. (2016) Zhu, J.H., Zhang, W.H., and Xia, L. Topology optimization in aircraft and aerospace structures design. Archives of Computational Methods in Engineering, 23(4):595–622, 2016.

 Zhu et al. (2019) Zhu, Y., Zabaras, N., Koutsourelakis, P.-S., and Perdikaris, P. Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data. Journal of Computational Physics, 394:56–81, 2019.