Faster Uncertainty Quantification for Inverse Problems with Conditional Normalizing Flows

by Ali Siahkoohi et al.
Georgia Institute of Technology

In inverse problems, we often have access to data consisting of paired samples (x, y) ∼ p_X,Y(x, y), where y are partial observations of a physical system and x represents the unknowns of the problem. Under these circumstances, we can employ supervised training to learn a solution x and its uncertainty from the observations y. We refer to this problem as the "supervised" case. However, the data y ∼ p_Y(y) collected at one point could be distributed differently than observations y' ∼ p_Y'(y') relevant for a current set of problems. In the context of Bayesian inference, we propose a two-step scheme that makes use of normalizing flows and joint data to train a conditional generator q_θ(x|y) to approximate the target posterior density p_X|Y(x|y). Additionally, this preliminary phase provides a density function q_θ(x|y) that can be recast as a prior for the "unsupervised" problem, e.g., when only the observations y' ∼ p_Y'(y'), a likelihood model y'|x, and a prior on x are known. We then train another invertible generator with output density q'_ϕ(x|y') specifically for y', allowing us to sample from the posterior p_X|Y'(x|y'). We present synthetic results that demonstrate a considerable training speedup when reusing the pretrained network q_θ(x|y) as a warm start, or preconditioner, for approximating p_X|Y'(x|y'), instead of learning from scratch. This training modality can be interpreted as an instance of transfer learning. This result is particularly relevant for large-scale inverse problems that employ expensive numerical simulations.
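The warm-start idea above can be illustrated on a toy problem. The sketch below is not the paper's method: it replaces the full normalizing flow with a one-parameter-family conditional affine flow x = a·y + b + exp(s)·z, z ∼ N(0, 1) (whose density q_θ(x|y) is Gaussian), and the toy data model, learning rates, and step budget are all illustrative assumptions. It pretrains θ = (a, b, s) by maximum likelihood on paired samples (step 1), then compares fine-tuning on a shifted observation distribution y' from the pretrained θ versus from scratch under the same small iteration budget (step 2).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "physical system": x is the unknown, y a noisy partial observation.
def sample_pairs(n, slope, noise):
    x = rng.normal(0.0, 1.0, n)
    y = slope * x + rng.normal(0.0, noise, n)
    return x, y

# Conditional affine flow: x = a*y + b + exp(s)*z, z ~ N(0, 1), so
# q_theta(x|y) is Gaussian with mean a*y + b and standard deviation exp(s).
def nll(theta, x, y):
    a, b, s = theta
    z = (x - a * y - b) * np.exp(-s)
    return np.mean(0.5 * z**2 + s)  # negative log-likelihood, up to a constant

def grad_nll(theta, x, y):
    a, b, s = theta
    inv = np.exp(-s)
    z = (x - a * y - b) * inv
    da = np.mean(-z * inv * y)
    db = np.mean(-z * inv)
    ds = np.mean(-z**2 + 1.0)
    return np.array([da, db, ds])

def train(theta, x, y, steps, lr=0.1):
    theta = theta.copy()
    for _ in range(steps):
        theta -= lr * grad_nll(theta, x, y)
    return theta

# Step 1: supervised pretraining on paired data (x, y) ~ p_{X,Y}.
x, y = sample_pairs(2000, slope=1.0, noise=0.3)
theta_pre = train(np.zeros(3), x, y, steps=500)

# Step 2: observations y' from a shifted distribution p_{Y'}.
x2, y2 = sample_pairs(2000, slope=1.2, noise=0.3)

# Warm start from the pretrained network vs. training from scratch,
# with the same small iteration budget.
budget = 10
warm = train(theta_pre, x2, y2, steps=budget)
cold = train(np.zeros(3), x2, y2, steps=budget)
print("warm-start NLL:", nll(warm, x2, y2))
print("from-scratch NLL:", nll(cold, x2, y2))
```

Because the pretrained parameters already sit near the new optimum, the warm-started run reaches a lower negative log-likelihood within the budget, which is the speedup mechanism the abstract describes.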



