Convergence Rates for Learning Linear Operators from Noisy Data

08/27/2021
by Maarten V. de Hoop, et al.

We study the Bayesian inverse problem of learning a linear operator on a Hilbert space from its noisy pointwise evaluations on random input data. Our framework assumes that the target operator is self-adjoint and diagonal in a basis shared with the Gaussian prior and noise covariance operators arising from the imposed statistical model; it is able to handle target operators that are compact, bounded, or even unbounded. We establish posterior contraction rates with respect to a family of Bochner norms as the number of data tends to infinity and derive related lower bounds on the estimation error. In the large data limit, we also provide asymptotic convergence rates of suitably defined excess risk and generalization gap functionals associated with the posterior mean point estimator. In doing so, we connect the posterior consistency results to nonparametric learning theory. Furthermore, these convergence rates highlight and quantify the difficulty of learning unbounded linear operators in comparison with the learning of bounded or compact ones. Numerical experiments confirm the theory and demonstrate that similar conclusions may be expected in more general problem settings.
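Because the target operator, prior, and noise covariance are assumed diagonal in a common basis, inference in this setting decouples across modes into scalar conjugate-Gaussian regressions. The following is a minimal, hypothetical Python sketch of that decoupled setup, not the paper's exact construction: the truncation level, eigenvalue sequences, and variance choices below are illustrative assumptions, and the final quantity is only a simple proxy for the excess risk of the posterior mean estimator.

```python
# Hypothetical sketch of the diagonal conjugate-Gaussian setting described in
# the abstract.  All parameter choices (J, N, decay exponents, variances) are
# illustrative assumptions, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)

J = 200          # truncation level for the shared eigenbasis
N = 1000         # number of noisy input/output pairs

j = np.arange(1, J + 1)

# Eigenvalues of the target operator in the shared basis: a decaying (compact)
# example; try np.ones(J) for a bounded or j**0.5 for an unbounded target.
l_true = j ** (-1.0)

# Input covariance diag(c_j), Gaussian prior diag(sigma_j^2) on the
# eigenvalues, and noise variance gamma^2 (all assumed diagonal / scalar).
c = j ** (-2.0)
sigma2 = j ** (-2.0)
gamma2 = 1e-2

# Simulate the statistical model y_n = L x_n + noise, coordinate by coordinate.
X = rng.normal(scale=np.sqrt(c), size=(N, J))
Y = X * l_true + rng.normal(scale=np.sqrt(gamma2), size=(N, J))

# Conjugate Gaussian posterior for each eigenvalue l_j (standard formulas for
# a scalar linear-Gaussian model with known noise variance).
post_prec = 1.0 / sigma2 + (X ** 2).sum(axis=0) / gamma2
post_var = 1.0 / post_prec
post_mean = post_var * (X * Y).sum(axis=0) / gamma2

# Simple excess-risk proxy: input-covariance-weighted squared error of the
# recovered eigenvalue sequence under the posterior mean point estimator.
excess_risk = np.sum(c * (post_mean - l_true) ** 2)
print(f"excess risk proxy with N={N}: {excess_risk:.3e}")
```

Rerunning the sketch with a constant or growing eigenvalue sequence gives a rough sense of how much harder bounded and unbounded targets are to recover than compact ones, which is the comparison the paper quantifies.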

