Inverse learning in Hilbert scales

02/24/2020
by Abhishake Rastogi, et al.

We study linear ill-posed inverse problems with noisy data in the statistical learning setting. Approximate reconstructions from random noisy data are sought via general regularization schemes in Hilbert scales. We discuss rates of convergence for the regularized solution under prior smoothness assumptions and a certain link condition, and we express the error in terms of certain distance functions. For regression functions whose smoothness is given in terms of source conditions, the error bound can then be established explicitly.
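As a concrete illustration of the setting, the following is a minimal numerical sketch of Tikhonov regularization in a Hilbert scale for a discretized linear ill-posed problem with noisy data: the data misfit is penalized by a smoothness-inducing quadratic term x^T L x, where L is built from a difference operator. The forward operator (a discretized integration operator), the penalty, and the choice of regularization parameter are illustrative assumptions, not the paper's specific scheme or its general regularization framework.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# Mildly ill-posed forward operator: discretized integration on [0, 1].
t = np.linspace(0, 1, n)
A = (t[:, None] >= t[None, :]).astype(float) / n

# Smooth ground truth and noisy observations (random noise model).
x_true = np.sin(np.pi * t)
y = A @ x_true + 1e-3 * rng.standard_normal(n)

# Hilbert-scale-type penalty: L = B^T B with B a first-difference operator,
# so the penalty measures the solution in a stronger (smoother) norm.
B = np.eye(n) - np.eye(n, k=1)
L = B.T @ B

def tikhonov_hilbert_scale(A, y, L, alpha):
    """Minimize ||A x - y||^2 + alpha * x^T L x (normal equations)."""
    return np.linalg.solve(A.T @ A + alpha * L, A.T @ y)

x_alpha = tikhonov_hilbert_scale(A, y, L, alpha=1e-4)
rel_err = np.linalg.norm(x_alpha - x_true) / np.linalg.norm(x_true)
```

Because the true solution here is smooth relative to the penalty, the reconstruction is stable against the injected noise; stronger penalties (higher powers of B) would correspond to moving further up the scale.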


Related research:

- Tikhonov regularization with oversmoothing penalty for nonlinear statistical inverse problems (02/01/2020): In this paper, we consider the nonlinear ill-posed inverse problem with ...
- PAC-Bayesian Based Adaptation for Regularized Learning (04/16/2022): In this paper, we propose a PAC-Bayesian a posteriori parameter selectio...
- Statistical Inverse Problems in Hilbert Scales (08/28/2022): In this paper, we study the Tikhonov regularization scheme in Hilbert sc...
- Error analysis for filtered back projection reconstructions in Besov spaces (04/14/2020): Filtered back projection (FBP) methods are the most widely used reconstr...
- Nonlinear Tikhonov regularization in Hilbert scales with oversmoothing penalty: inspecting balancing principles (12/21/2020): The analysis of Tikhonov regularization for nonlinear ill-posed equation...
- Increasing the relative smoothness of stochastically sampled data (03/05/2021): We consider a linear ill-posed equation in the Hilbert space setting. Mu...
- Convex regularization in statistical inverse learning problems (02/18/2021): We consider a statistical inverse learning problem, where the task is to...
