Topology Optimization under Uncertainty using a Stochastic Gradient-based Approach

by Subhayan De, et al.

Topology optimization under uncertainty (TOuU) often defines objectives and constraints in terms of statistical moments of geometric and physical quantities of interest. Most traditional TOuU methods use gradient-based optimization algorithms and rely on accurate estimates of the statistical moments and their gradients, e.g., via adjoint calculations. When the number of uncertain inputs is large or the quantities of interest exhibit large variability, many adjoint (and/or forward) solves may be required to ensure the accuracy of these gradients. The optimization procedure itself often requires a large number of iterations, which may render the TOuU problem computationally expensive, if not infeasible, due to the overall large number of required adjoint (and/or forward) solves. To tackle this difficulty, we propose an optimization approach that generates a stochastic approximation of the objective, constraints, and their gradients via a small number of adjoint (and/or forward) solves per optimization iteration; a statistically independent (stochastic) approximation of these quantities is generated at each iteration. This approach results in a total cost that is only a small factor larger than that of the corresponding deterministic TO problem. First, we investigate the stochastic gradient descent (SGD) method and a number of its variants, which have been successfully applied to large-scale optimization problems in machine learning. Second, we study the use of this stochastic approximation approach within conventional nonlinear programming methods, focusing on the Globally Convergent Method of Moving Asymptotes (GCMMA), a scheme widely used in deterministic TO. The performance of these algorithms is investigated on structural design problems using Solid Isotropic Material with Penalization (SIMP)-based optimization, as well as an explicit level set method.
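The core idea of the abstract, replacing accurate moment estimates with a cheap, independently resampled stochastic approximation at each iteration, can be illustrated with a minimal SGD sketch. This is not the paper's method or its PDE-based TO problem; it is a toy quadratic objective f(x, θ) = ½‖x − θ‖² with random θ, where each Monte Carlo sample stands in for one forward/adjoint solve. All names (`sample_gradient`, `sgd`, the choice of distribution, step size, and batch size) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gradient(x, n_samples=4):
    """Mini-batch Monte Carlo estimate of grad_x E_theta[ f(x, theta) ]
    for the toy objective f(x, theta) = 0.5 * ||x - theta||^2 with
    theta ~ N(mu, I). Each sample plays the role of one (cheap)
    forward/adjoint solve; the estimate is resampled independently
    at every optimization iteration."""
    mu = np.ones_like(x)                                  # assumed mean of theta
    thetas = mu + rng.standard_normal((n_samples, x.size))
    # For this f, grad_x f(x, theta) = x - theta; average over the batch.
    return np.mean(x[None, :] - thetas, axis=0)

def sgd(x0, step=0.1, iters=500, n_samples=4):
    """Plain SGD: only n_samples 'solves' per iteration, instead of the
    many samples an accurate moment estimate would require."""
    x = x0.copy()
    for _ in range(iters):
        x -= step * sample_gradient(x, n_samples)
    return x

x_opt = sgd(np.zeros(5))
# The exact minimizer of E[f] here is x = mu = (1, ..., 1); SGD hovers
# near it with noise set by the step size and batch size.
```

With a fixed step size the iterates fluctuate around the true minimizer; the variants the abstract alludes to (decaying steps, momentum, adaptive methods) reduce this noise floor, which matters when each gradient sample is an expensive adjoint solve.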






Bilevel stochastic methods for optimization and machine learning: Bilevel stochastic descent and DARTS

Two-level stochastic optimization formulations have become instrumental ...

Adaptive Optimization with Examplewise Gradients

We propose a new, more general approach to the design of stochastic grad...

Semi-Stochastic Gradient Descent Methods

In this paper we study the problem of minimizing the average of a large ...

Less than a Single Pass: Stochastically Controlled Stochastic Gradient Method

We develop and analyze a procedure for gradient-based optimization that ...

Random Multi-Constraint Projection: Stochastic Gradient Methods for Convex Optimization with Many Constraints

Consider convex optimization problems subject to a large number of const...

Fail-safe optimization of viscous dampers for seismic retrofitting

This paper presents a new optimization approach for designing minimum-co...

B-Splines for Sparse Grids: Algorithms and Application to Higher-Dimensional Optimization

In simulation technology, computationally expensive objective functions ...

Code Repositories


Implementation of Stochastic Gradient Descent algorithms in Python
