How many samples are needed to reliably approximate the best linear estimator for a linear inverse problem?

07/01/2021 ∙ by Gernot Holler, et al.

The linear minimum mean squared error (LMMSE) estimator is the best linear estimator for a Bayesian linear inverse problem with respect to the mean squared error. It arises as the solution operator of a Tikhonov-type regularized inverse problem with a particular quadratic discrepancy term and a particular quadratic regularization operator. Evaluating the LMMSE estimator requires knowledge of the forward operator and of the first two statistical moments of both the prior and the noise. When this knowledge is unavailable, one may instead approximate the LMMSE estimator from given samples. This work investigates, in a finite-dimensional setting, how many samples are needed to reliably approximate the LMMSE estimator, in the sense that, with high probability, the mean squared error of the approximation is at most a given multiple of the mean squared error of the LMMSE estimator.
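To make the setting concrete, the following is a minimal illustrative sketch (not the paper's construction, and with an arbitrary toy problem of my own choosing): for a linear model y = Ax + ε with Gaussian prior and noise, it compares the exact LMMSE estimate, computed from the true moments, against a plug-in approximation built from empirical moments of joint samples (x_i, y_i). All dimensions, names, and distributions here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy linear inverse problem y = A x + eps (illustrative only):
# Gaussian prior on x, Gaussian noise eps, known forward operator A.
n, m = 5, 3                      # dimensions of unknown x and data y
A = rng.standard_normal((m, n))
mu_x = np.zeros(n)
Sigma_x = np.eye(n)
Sigma_eps = 0.1 * np.eye(m)

def lmmse(y):
    """Exact LMMSE estimate from the true first two moments."""
    K = Sigma_x @ A.T @ np.linalg.inv(A @ Sigma_x @ A.T + Sigma_eps)
    return mu_x + K @ (y - A @ mu_x)

def empirical_lmmse(xs, ys, y):
    """Plug-in approximation: same formula, but with empirical moments
    estimated from N joint samples (x_i, y_i)."""
    mx, my = xs.mean(axis=0), ys.mean(axis=0)
    Xc, Yc = xs - mx, ys - my
    C_xy = Xc.T @ Yc / (len(xs) - 1)   # empirical cross-covariance of x, y
    C_yy = Yc.T @ Yc / (len(ys) - 1)   # empirical covariance of y
    return mx + C_xy @ np.linalg.inv(C_yy) @ (y - my)

# Draw N joint samples and compare the two estimates on one observation.
N = 10_000
xs = rng.multivariate_normal(mu_x, Sigma_x, size=N)
ys = xs @ A.T + rng.multivariate_normal(np.zeros(m), Sigma_eps, size=N)

y_obs = ys[0]
exact = lmmse(y_obs)
approx = empirical_lmmse(xs, ys, y_obs)
print(np.linalg.norm(exact - approx))  # shrinks as N grows
```

The question the paper studies is how large N must be so that, with high probability, the mean squared error of such a sample-based approximation is within a prescribed factor of that of the exact LMMSE estimator.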
