
Probabilistic Numerical Methods for PDE-constrained Bayesian Inverse Problems
This paper develops meshless methods for probabilistically describing di...

Unsupervised Deep Learning Algorithm for PDE-based Forward and Inverse Problems
We propose a neural network-based algorithm for solving forward and inve...

Ghost Point Diffusion Maps for solving elliptic PDEs on Manifolds with Classical Boundary Conditions
In this paper, we extend the class of kernel methods, the so-called diff...

Inverse Density as an Inverse Problem: The Fredholm Equation Approach
In this paper we address the problem of estimating the ratio q/p where p...

Continuum Limit of Posteriors in Graph Bayesian Inverse Problems
We consider the problem of recovering a function input of a differential...

Optimal experimental design under irreducible uncertainty for inverse problems governed by PDEs
We present a method for computing A-optimal sensor placements for infini...

On Bayesian Consistency for Flows Observed Through a Passive Scalar
We consider the statistical inverse problem of estimating a background f...
Kernel Methods for Bayesian Elliptic Inverse Problems on Manifolds
This paper investigates the formulation and implementation of Bayesian inverse problems to learn input parameters of partial differential equations (PDEs) defined on manifolds. Specifically, we study the inverse problem of determining the diffusion coefficient of a second-order elliptic PDE on a closed manifold from noisy measurements of the solution. Inspired by manifold learning techniques, we approximate the elliptic differential operator with a kernel-based integral operator that can be discretized via Monte Carlo without reference to the Riemannian metric. The resulting computational method is mesh-free and easy to implement, and can be applied without full knowledge of the underlying manifold, provided that a point cloud of manifold samples is available. We adopt a Bayesian perspective on the inverse problem, and establish an upper bound on the total variation distance between the true posterior and an approximate posterior defined with the kernel forward map. Supporting numerical results show the effectiveness of the proposed methodology.
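To make the kernel-based construction concrete, the following is a minimal sketch (not the paper's exact algorithm) of a diffusion-maps-style approximation of the Laplace-Beltrami operator on the unit circle, built purely from a point cloud of samples with no reference to the Riemannian metric. The bandwidth `eps`, the sample count `n`, and the choice of the circle as the manifold are illustrative assumptions; the paper's Monte Carlo discretization applies to random samples on a general closed manifold.

```python
import numpy as np

# Point cloud on S^1 embedded in R^2. Equispaced samples are used here for
# clarity; a Monte Carlo discretization would draw them at random.
n = 400
theta = 2 * np.pi * np.arange(n) / n
X = np.column_stack([np.cos(theta), np.sin(theta)])

eps = 0.01  # kernel bandwidth parameter
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared Euclidean distances
K = np.exp(-d2 / (4 * eps))  # heat-kernel-type Gaussian kernel

# Density normalization (alpha = 1 in diffusion maps) to remove
# sampling-density bias, followed by a Markov (row-stochastic) normalization.
q = K.sum(axis=1)
K1 = K / np.outer(q, q)
P = K1 / K1.sum(axis=1, keepdims=True)

# (P - I) / eps approximates the Laplace-Beltrami operator as eps -> 0.
L = (P - np.eye(n)) / eps

# Sanity check on an exact eigenfunction: Delta cos(theta) = -cos(theta).
f = np.cos(theta)
err = np.max(np.abs(L @ f + f))
print(f"max pointwise error: {err:.3e}")
```

Because the construction uses only pairwise Euclidean distances between samples, the same code applies verbatim to any point cloud from a closed manifold, which is the mesh-free property the abstract highlights.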