Hyper-differential sensitivity analysis with respect to model discrepancy: Calibration and optimal solution updating

10/17/2022
by Joseph Hart, et al.

Optimization constrained by computational models is common across science and engineering. However, in many cases a high-fidelity model of the system cannot be optimized due to its complexity and computational cost. Rather, a low(er)-fidelity model is constructed to enable the intrusive, many-query algorithms needed for large-scale optimization. As a result of the discrepancy between the high- and low-fidelity models, the optimal solution determined using the low-fidelity model is frequently far from true optimality. In this article, we introduce a novel approach that uses limited high-fidelity data to calibrate the model discrepancy in a Bayesian framework and propagate it through the optimization problem. The result provides both an improvement in the optimal solution and a characterization of uncertainty due to the limited accessibility of high-fidelity data. Our formulation exploits structure in the post-optimality sensitivity operator to ensure computational scalability. Numerical results demonstrate how an optimal solution computed using a low-fidelity model may be significantly improved with as few as one evaluation of a high-fidelity model.
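As a rough illustration of the optimal-solution-updating idea described in the abstract, the sketch below solves a toy quadratic problem with a low-fidelity model and then corrects the optimizer using a single high-fidelity gradient evaluation via a Newton-type post-optimality step. The functions `J_lo`, `grad_J_lo`, `hess_J_lo`, and `grad_J_hi` are hypothetical stand-ins, and the update rule is a simplified first-order surrogate for the paper's Bayesian calibration-and-propagation procedure, not the authors' algorithm.

```python
# Hypothetical illustration (not the authors' method): update a low-fidelity
# optimal solution using one high-fidelity gradient evaluation and a
# Newton-type post-optimality correction. All model functions are toy stand-ins.
import numpy as np
from scipy.optimize import minimize

def J_lo(z):
    """Low-fidelity objective: a cheap surrogate we can optimize many times."""
    return 0.5 * np.dot(z - 1.0, z - 1.0)

def grad_J_lo(z):
    return z - 1.0

def hess_J_lo(z):
    return np.eye(z.size)

def grad_J_hi(z):
    """Single (expensive) high-fidelity gradient; its optimum is shifted."""
    return z - np.array([1.3, 0.8])

# Step 1: optimize the low-fidelity model (many cheap evaluations).
res = minimize(J_lo, np.zeros(2), jac=grad_J_lo, method="BFGS")
z_lo = res.x

# Step 2: estimate the discrepancy gradient at the low-fidelity optimum
# from one high-fidelity evaluation.
g_delta = grad_J_hi(z_lo) - grad_J_lo(z_lo)

# Step 3: first-order post-optimality update of the optimizer with
# respect to the estimated discrepancy.
z_updated = z_lo - np.linalg.solve(hess_J_lo(z_lo), g_delta)

print("low-fidelity optimum:", z_lo)
print("updated optimum     :", z_updated)
```

In this toy setting the corrected solution lands on the high-fidelity optimum exactly because both objectives are quadratic; in general the update gives a first-order improvement whose remaining error reflects the limited high-fidelity data.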
