Resolution-independent generative models based on operator learning for physics-constrained Bayesian inverse problems
The Bayesian inference approach is widely used to tackle inverse problems due to its versatility and natural ability to handle ill-posedness. However, it often faces challenges when dealing with continuous fields or high-resolution discrete representations, which are high-dimensional. Moreover, the prior distribution of the unknown parameters is often difficult to determine. In this study, an Operator Learning-based Generative Adversarial Network (OL-GAN) is proposed and integrated into the Bayesian inference framework to address these issues. Unlike most Bayesian approaches, the distinctive feature of the proposed method is that it learns the joint distribution of parameters and responses. By leveraging the trained generative model, the posteriors of the unknown parameters can theoretically be approximated by any sampling algorithm (e.g., Markov chain Monte Carlo, MCMC) in a low-dimensional latent space shared by the components of the joint distribution. The latent space typically follows a simple, easy-to-sample distribution (e.g., Gaussian, uniform), which significantly reduces the computational cost of Bayesian inference while avoiding prior-selection concerns. Furthermore, incorporating operator learning makes the generator resolution-independent: predictions can be obtained at arbitrary coordinates, and inversions can be performed even when the observation locations are misaligned with the training data. Finally, the effectiveness of the proposed method is validated through several numerical experiments.
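The latent-space inference described above can be illustrated with a minimal sketch. Here a simple closed-form map stands in for the trained OL-GAN generator (the real model would be a neural operator); the `generator`, `log_posterior`, and Metropolis-Hastings loop are illustrative assumptions, not the paper's implementation. The point is that the prior is just the simple latent distribution, and the likelihood is evaluated on responses generated at the observation coordinates, which need not match any training grid.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained generator: maps a low-dimensional latent
# vector z to a (parameter, response) pair from the learned joint distribution.
# A closed-form map replaces the real operator-learning network here.
def generator(z, coords):
    theta = z[0]                        # "unknown parameter" component
    response = theta * np.sin(coords)   # "response" evaluated at arbitrary coords
    return theta, response

# Synthetic observations at coordinates chosen freely (resolution independence).
obs_coords = np.linspace(0.0, np.pi, 7)
true_theta, noise_std = 1.5, 0.1
observations = true_theta * np.sin(obs_coords) \
    + noise_std * rng.normal(size=obs_coords.size)

def log_posterior(z):
    # The prior on z is the simple latent distribution (standard Gaussian),
    # so no problem-specific prior on theta itself has to be specified.
    log_prior = -0.5 * np.sum(z**2)
    _, pred = generator(z, obs_coords)
    log_like = -0.5 * np.sum((pred - observations)**2) / noise_std**2
    return log_prior + log_like

# Random-walk Metropolis-Hastings in the low-dimensional latent space.
z = np.zeros(2)
lp = log_posterior(z)
samples = []
for _ in range(8000):
    z_prop = z + 0.1 * rng.normal(size=z.size)
    lp_prop = log_posterior(z_prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        z, lp = z_prop, lp_prop
    samples.append(generator(z, obs_coords)[0])

theta_mean = np.mean(samples[2000:])  # posterior mean after burn-in
```

Because every MCMC proposal lives in the low-dimensional latent space rather than on a discretized field, the chain's dimensionality is fixed by the generator's latent size, not by the observation resolution.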