Residual-based error correction for neural operator accelerated infinite-dimensional Bayesian inverse problems

by Lianghao Cao, et al.

We explore using neural operators, or neural network representations of nonlinear maps between function spaces, to accelerate infinite-dimensional Bayesian inverse problems (BIPs) with models governed by nonlinear parametric partial differential equations (PDEs). Neural operators have gained significant attention in recent years for their ability to approximate the parameter-to-solution maps defined by PDEs using, as training data, PDE solutions at a limited number of parameter samples. The computational cost of BIPs can be drastically reduced if the large number of PDE solves required for posterior characterization is replaced with evaluations of trained neural operators. However, reducing the error in the resulting BIP solutions by reducing the approximation error of the neural operators in training can be challenging and unreliable. We provide an a priori error bound result implying that certain BIPs can be ill-conditioned with respect to the approximation error of neural operators, leading to accuracy requirements that are unattainable in training. To reliably deploy neural operators in BIPs, we consider a strategy for enhancing their performance: correcting the prediction of a trained neural operator by solving a linear variational problem based on the PDE residual. We show that a trained neural operator with error correction can achieve a quadratic reduction of its approximation error, while retaining substantial computational speedups in posterior sampling when models are governed by highly nonlinear PDEs. The strategy is applied to two numerical examples of BIPs based on a nonlinear reaction–diffusion problem and on the deformation of hyperelastic materials. We demonstrate that the posterior representations of the two BIPs produced using trained neural operators are greatly and consistently enhanced by error correction.
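The residual-based correction described above amounts to one linearized (Newton-type) solve applied to the surrogate prediction, which is why the error reduction is quadratic. The following minimal sketch illustrates the idea on a hypothetical discretized 1D nonlinear reaction–diffusion problem; the problem setup, discretization, and the stand-in for a trained neural operator are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Toy 1D nonlinear reaction-diffusion problem: -u'' + u^3 = f on (0, 1),
# with u(0) = u(1) = 0, discretized by centered finite differences.
# (Hypothetical setup chosen only to illustrate the correction step.)
n = 100
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)

# Stiffness matrix for -u'' (tridiagonal second-difference operator).
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

u_true = np.sin(np.pi * x)          # manufactured solution
f = A @ u_true + u_true**3          # right-hand side consistent with it

def residual(u):
    """Discrete PDE residual R(u) = A u + u^3 - f."""
    return A @ u + u**3 - f

def correct(u):
    """One residual-based correction: solve the linearized problem
    J(u) du = -R(u), where J is the Jacobian of R, and update u."""
    J = A + np.diag(3.0 * u**2)
    return u + np.linalg.solve(J, -residual(u))

# Stand-in for a trained neural operator's prediction: the true solution
# polluted by an O(eps) approximation error.
eps = 1e-2
u_pred = u_true + eps * np.sin(2 * np.pi * x)

err_before = np.max(np.abs(u_pred - u_true))
err_after = np.max(np.abs(correct(u_pred) - u_true))
print(err_before, err_after)  # corrected error is O(eps^2), i.e. quadratic reduction
```

Because the correction removes the first-order part of the residual, the remaining error is dominated by terms quadratic in the surrogate's error, which is the quadratic reduction the abstract refers to. Only one linear solve is needed per prediction, so the speedup over a full nonlinear solve is largely preserved for highly nonlinear models.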
