Conditional Karhunen-Loève expansion for uncertainty quantification and active learning in partial differential equation models

04/17/2019
by Ramakrishna Tipireddy et al.

We use a conditional Karhunen-Loève (KL) model to quantify and reduce uncertainty in a stochastic partial differential equation (SPDE) problem with a partially known, space-dependent coefficient, Y(x). We assume that a small number of Y(x) measurements are available and model Y(x) with a KL expansion. We achieve a reduction in uncertainty by conditioning the KL expansion coefficients on the measurements. We consider two approaches for conditioning the KL expansion: in Approach 1, we condition the KL model first and then truncate it; in Approach 2, we first truncate the KL expansion and then condition it. We employ the conditional KL expansion together with Monte Carlo and sparse grid collocation methods to compute the moments of the solution of the SPDE problem. Uncertainty of the problem is further reduced by adaptively selecting additional observation locations using two active learning methods: Method 1 minimizes the variance of the PDE coefficient, while Method 2 minimizes the variance of the solution of the PDE. We demonstrate that conditioning leads to dimension reduction of the KL representation of Y(x). For a linear diffusion SPDE with an uncertain log-normal coefficient, we show that Approach 1 provides a more accurate approximation of the conditional log-normal coefficient and solution of the SPDE than Approach 2 for the same number of random dimensions in a conditional KL expansion. Furthermore, Approach 2 provides a good estimate for the number of terms to retain in the truncated KL expansion of the conditional field of Approach 1. Finally, we demonstrate that active learning based on Method 2 is more efficient for uncertainty reduction in the SPDE's states (i.e., it leads to a larger reduction of the variance) than active learning using Method 1.
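The core of Approach 1 (condition first, then truncate) can be sketched with standard Gaussian-process conditioning followed by an eigendecomposition of the conditional covariance. The sketch below is illustrative only and is not the authors' code: the 1-D grid, the squared-exponential covariance, the measurement locations, and the 99% energy truncation threshold are all assumptions chosen for the example.

```python
import numpy as np

# Hypothetical 1-D setup: grid and squared-exponential covariance for Y(x).
x = np.linspace(0.0, 1.0, 100)
C = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.2**2))

# A few assumed "measurement" locations (indices into the grid).
obs = np.array([10, 50, 90])
noise = 1e-8  # small jitter for numerical stability

# Approach 1, step 1 -- condition the Gaussian field on the observations:
#   C_cond = C - C[:, obs] @ inv(C[obs, obs] + noise*I) @ C[obs, :]
K = C[np.ix_(obs, obs)] + noise * np.eye(len(obs))
C_cond = C - C[:, obs] @ np.linalg.solve(K, C[obs, :])

# Approach 1, step 2 -- truncate: keep the leading eigenpairs of the
# conditional covariance (the conditional KL modes).
w, V = np.linalg.eigh(C_cond)
w, V = w[::-1], V[:, ::-1]  # eigh returns ascending order; sort descending
d = int(np.searchsorted(np.cumsum(w) / w.sum(), 0.99)) + 1  # 99% variance

# Sample the conditional field with d random dimensions.
rng = np.random.default_rng(0)
xi = rng.standard_normal(d)
y_sample = V[:, :d] @ (np.sqrt(np.maximum(w[:d], 0.0)) * xi)

# Conditioning drives the variance to ~0 at the measurement locations,
# which is what enables the dimension reduction discussed in the abstract.
print("max variance at obs:", np.diag(C_cond)[obs].max())
print("retained KL dimensions:", d)
```

Approach 2 would instead eigendecompose the prior covariance `C`, truncate, and then condition the retained KL coefficients on the measurements; the abstract's comparison concerns the accuracy of these two orderings at a fixed number of random dimensions.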



