Bayesian neural network priors for edge-preserving inversion

by Chen Li et al.

We consider Bayesian inverse problems wherein the unknown state is assumed a priori to be a function with discontinuous structure. A class of prior distributions based on the output of neural networks with heavy-tailed weights is introduced, motivated by existing results concerning the infinite-width limit of such networks. We show theoretically that samples from such priors have desirable discontinuity-like properties even when the network width is finite, making them appropriate for edge-preserving inversion. Numerically, we consider deconvolution problems on one- and two-dimensional spatial domains to illustrate the effectiveness of these priors; MAP estimation, dimension-robust MCMC sampling, and ensemble-based approximations are used to probe the posterior distribution. The resulting point estimates are shown to be more accurate than those obtained from non-heavy-tailed priors, and the uncertainty estimates are shown to provide more useful qualitative information.
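The idea of a heavy-tailed neural network prior can be illustrated with a minimal sketch. The construction below is an assumption for illustration only, not the paper's actual prior: it draws one random function from a single-hidden-layer ReLU network whose weights and biases follow a standard Cauchy distribution, a common example of a heavy-tailed law. The width, scaling, and activation choices are all hypothetical.

```python
import numpy as np

def sample_prior_function(x, width=256, out_scale=1.0, rng=None):
    """Draw one random function from a single-hidden-layer ReLU network
    with i.i.d. standard Cauchy weights and biases.

    Illustrative sketch only: the width, Cauchy law, ReLU activation, and
    1/width output scaling are assumptions, not the paper's construction.
    """
    rng = np.random.default_rng(rng)
    # Heavy-tailed input-to-hidden weights and biases.
    w_in = rng.standard_cauchy(width)
    b_in = rng.standard_cauchy(width)
    # Hidden-to-output weights, scaled down with the width.
    w_out = rng.standard_cauchy(width) * (out_scale / width)
    # ReLU hidden layer evaluated at every point of x; a few Cauchy
    # weights dominate, producing sharp, edge-like kinks in the sample.
    hidden = np.maximum(w_in[None, :] * x[:, None] + b_in[None, :], 0.0)
    return hidden @ w_out

# One prior draw on a grid over [0, 1].
x = np.linspace(0.0, 1.0, 200)
u = sample_prior_function(x, rng=0)
```

Because the Cauchy distribution has no finite mean, a handful of hidden units with very large weights typically dominate each draw, which is the qualitative mechanism behind the "discontinuity-like" samples the abstract refers to; a Gaussian-weight network of the same width would instead produce much smoother draws.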






