Taylor approximation for chance constrained optimization problems governed by partial differential equations with high-dimensional random parameters

11/19/2020
by Peng Chen, et al.

We propose a fast and scalable optimization method to solve chance (probabilistic) constrained optimization problems governed by partial differential equations (PDEs) with high-dimensional random parameters. To address the two critical computational challenges, the expense of solving the PDE and the high dimension of the uncertainty, we construct surrogates of the constraint function by Taylor approximation, which relies on efficient computation of the derivatives, low-rank approximation of the Hessian, and a randomized algorithm for eigenvalue decomposition. To tackle the non-differentiability of the inequality chance constraint, we use a smooth approximation of the discontinuous indicator function involved in the chance constraint and apply a penalty method to transform the inequality-constrained optimization problem into an unconstrained one. Moreover, we design a gradient-based optimization scheme that gradually increases the smoothing and penalty parameters to achieve convergence, for which we present an efficient computation of the gradient of the approximate cost functional via the Taylor approximation. Through numerical experiments on an optimal groundwater management problem, we demonstrate the accuracy of the Taylor approximation, its ability to greatly accelerate constraint evaluations, the convergence of the continuation optimization scheme, and the scalability of the proposed method in terms of the number of PDE solves as the random parameter dimension increases from one thousand to hundreds of thousands.
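The abstract names three computational ingredients: a Taylor surrogate of the constraint with a low-rank Hessian from a randomized eigendecomposition, a smoothed indicator function for the chance constraint, and a penalty method driven by a continuation scheme. The following toy sketch in Python illustrates the smoothing, penalty, and continuation ideas on a deliberately cheap constraint function; all names and data here (J, f, alpha, the linear constraint samples) are illustrative assumptions, not the authors' implementation or the groundwater problem from the paper.

```python
import numpy as np
from scipy.special import expit  # numerically stable sigmoid

rng = np.random.default_rng(0)

# Toy problem (illustrative assumption, not the paper's groundwater model):
# minimize J(z) subject to P[f(z, theta) > 0] <= alpha, theta ~ N(0, I).
d, n_samples = 5, 2000
thetas = rng.standard_normal((n_samples, d))  # Monte Carlo samples of theta
z_target = np.ones(d)                         # unconstrained minimizer of J
b, alpha = 2.0, 0.05                          # constraint offset and risk level

def J(z):
    """Objective: squared distance to a target design."""
    return 0.5 * np.sum((z - z_target) ** 2)

def f(z):
    """Constraint samples f(z, theta_i) = theta_i . z - b (a cheap stand-in
    for the PDE-dependent constraint function)."""
    return thetas @ z - b

def sigmoid(t, beta):
    """Smooth approximation of the indicator 1[t > 0]; sharpens as beta grows."""
    return expit(beta * t)

def penalty_grad(z, beta, gamma):
    """Gradient of L(z) = J(z) + gamma * max(0, c(z))^2, where
    c(z) = mean_i sigmoid(f_i(z), beta) - alpha smooths the chance constraint."""
    s = sigmoid(f(z), beta)
    c = s.mean() - alpha
    grad_J = z - z_target
    if c <= 0.0:                                  # constraint inactive: pure objective step
        return grad_J, c
    ds = beta * s * (1.0 - s)                     # derivative of the sigmoid at each sample
    grad_c = (ds[:, None] * thetas).mean(axis=0)
    return grad_J + 2.0 * gamma * c * grad_c, c

# Continuation: gradually sharpen the smoothing (beta) and stiffen the
# penalty (gamma), warm-starting each stage from the previous iterate.
z = np.zeros(d)
for beta, gamma in [(2.0, 1.0), (8.0, 10.0), (32.0, 100.0)]:
    for _ in range(500):                          # plain gradient descent per stage
        g, c = penalty_grad(z, beta, gamma)
        z -= 1e-2 * g
    print(f"beta={beta:5.1f}  gamma={gamma:6.1f}  "
          f"empirical P[f>0] = {(f(z) > 0).mean():.3f}  J = {J(z):.3f}")
```

Warm-starting each continuation stage from the previous iterate means the sharper sigmoid and stiffer penalty refine an already nearly feasible design instead of destabilizing the iteration.

The low-rank Hessian ingredient rests on a randomized eigenvalue decomposition that needs only matrix-vector products, in the spirit of Halko, Martinsson, and Tropp (2011). A generic sketch, again an assumption rather than the paper's code:

```python
import numpy as np

def randomized_eig(hess_vec, n, rank, n_oversample=10, rng=None):
    """Dominant eigenpairs of a symmetric n-by-n operator given only its
    action on vectors (e.g. Hessian actions computed via adjoint solves),
    following the randomized scheme of Halko et al. (2011)."""
    rng = rng or np.random.default_rng()
    Omega = rng.standard_normal((n, rank + n_oversample))
    Y = np.column_stack([hess_vec(w) for w in Omega.T])    # sample the range of H
    Q, _ = np.linalg.qr(Y)                                 # orthonormal range basis
    T = Q.T @ np.column_stack([hess_vec(q) for q in Q.T])  # small projected matrix
    lam, S = np.linalg.eigh(T)
    idx = np.argsort(np.abs(lam))[::-1][:rank]             # keep dominant eigenpairs
    return lam[idx], Q @ S[:, idx]
```

With eigenpairs (lambda_j, v_j), the quadratic term of the Taylor surrogate reduces to (1/2) * sum_j lambda_j * (v_j . (theta - theta_bar))^2, so evaluating the surrogate at each Monte Carlo sample costs only a handful of inner products rather than a PDE solve, which is what makes the accelerated constraint evaluations reported in the abstract possible.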


Related research

01/20/2023
State-constrained Optimization Problems under Uncertainty: A Tensor Train Approach
We propose an algorithm to solve optimization problems constrained by pa...

05/31/2023
Efficient PDE-Constrained optimization under high-dimensional uncertainty using derivative-informed neural operators
We propose a novel machine learning framework for solving optimization p...

11/09/2021
TTRISK: Tensor Train Decomposition Algorithm for Risk Averse Optimization
This article develops a new algorithm named TTRISK to solve high-dimensi...

10/28/2020
A fast and scalable computational framework for large-scale and high-dimensional Bayesian optimal experimental design
We develop a fast and scalable computational framework to solve large-sc...

02/07/2020
Discretization and Machine Learning Approximation of BSDEs with a Constraint on the Gains-Process
We study the approximation of backward stochastic differential equations...

12/03/2019
Implementing a smooth exact penalty function for general constrained nonlinear optimization
We build upon Estrin et al. (2019) to develop a general constrained nonl...

06/20/2022
A globally convergent method to accelerate large-scale optimization using on-the-fly model hyperreduction: application to shape optimization
We present a numerical method to efficiently solve optimization problems...
