Taylor approximation for chance constrained optimization problems governed by partial differential equations with high-dimensional random parameters

11/19/2020
by Peng Chen, et al.

We propose a fast and scalable optimization method for solving chance-constrained (probabilistic) optimization problems governed by partial differential equations (PDEs) with high-dimensional random parameters. To address the two critical computational challenges, expensive PDE solves and high-dimensional uncertainty, we construct surrogates of the constraint function by Taylor approximation, which relies on efficient computation of the derivatives, low-rank approximation of the Hessian, and a randomized algorithm for eigenvalue decomposition. To tackle the non-differentiability of the inequality chance constraint, we use a smooth approximation of the discontinuous indicator function inside the chance constraint and apply a penalty method to transform the inequality-constrained optimization problem into an unconstrained one. Moreover, we design a gradient-based optimization scheme that gradually increases the smoothing and penalty parameters to achieve convergence, for which we present an efficient computation of the gradient of the approximate cost functional via the Taylor approximation. Through numerical experiments on an optimal groundwater management problem, we demonstrate the accuracy of the Taylor approximation, its ability to greatly accelerate constraint evaluations, the convergence of the continuation optimization scheme, and the scalability of the proposed method, measured in the number of PDE solves, as the random parameter dimension increases from one thousand to hundreds of thousands.
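The smoothing-plus-penalty continuation idea can be illustrated with a toy one-dimensional sketch (this is an assumption-laden illustration, not the paper's implementation, which uses PDE-governed constraints and Taylor surrogates). Here the chance constraint P[m - z > 0] <= alpha, with a scalar design z and a standard normal random parameter m, is smoothed by replacing the indicator with a logistic sigmoid of sharpness beta, penalized quadratically with weight gamma, and solved by gradient descent while beta and gamma are jointly increased:

```python
import numpy as np

rng = np.random.default_rng(0)
m = rng.standard_normal(20_000)   # Monte Carlo samples of the random parameter
alpha = 0.05                      # allowed probability of constraint violation

def sigmoid(x):
    # numerically stable logistic function, used to smooth the indicator 1[. > 0]
    return 0.5 * (1.0 + np.tanh(0.5 * x))

def penalized_gradient(z, beta, gamma):
    """Gradient of J(z) = z + gamma * max(P_beta(z) - alpha, 0)^2,
    where P_beta(z) = E[sigmoid(beta * (m - z))] smooths P[m - z > 0]."""
    s = sigmoid(beta * (m - z))
    residual = s.mean() - alpha                # smoothed chance minus tolerance
    violation = max(residual, 0.0)
    d_chance = (-beta * s * (1.0 - s)).mean()  # d/dz of the smoothed chance
    return 1.0 + 2.0 * gamma * violation * d_chance

z = 0.0
# continuation: gradually sharpen the smoothing (beta) and the penalty (gamma)
for beta, gamma in [(2.0, 10.0), (8.0, 100.0), (32.0, 1000.0)]:
    for _ in range(2000):
        z -= 1e-3 * penalized_gradient(z, beta, gamma)

print(f"optimal design z = {z:.2f}")
```

With a quadratic penalty the iterate approaches, but slightly undershoots, the exact 95% normal quantile (about 1.64), since a finite penalty weight tolerates a small residual violation; the continuation over (beta, gamma) is what keeps each unconstrained subproblem smooth enough for plain gradient descent.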


