How many samples are needed to reliably approximate the best linear estimator for a linear inverse problem?

07/01/2021, by Gernot Holler, et al.

The linear minimum mean squared error (LMMSE) estimator is the best linear estimator for a Bayesian linear inverse problem with respect to the mean squared error. It arises as the solution operator of a Tikhonov-type regularized inverse problem with a particular quadratic discrepancy term and a particular quadratic regularization operator. To evaluate the LMMSE estimator, one must know the forward operator and the first two statistical moments of both the prior and the noise. If such knowledge is not available, one may approximate the LMMSE estimator from given samples. In this work, we investigate, in a finite-dimensional setting, how many samples are needed to reliably approximate the LMMSE estimator, in the sense that, with high probability, the mean squared error of the approximation is smaller than a given multiple of the mean squared error of the LMMSE estimator.
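The sample-based approximation described above can be sketched numerically. The following is an illustrative toy example, not the paper's construction: all problem dimensions, covariances, and the specific forward operator are assumptions chosen for demonstration. It forms the exact LMMSE estimator from the known moments, forms an empirical counterpart from samples, and compares the two mean squared errors on fresh test data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Bayesian linear inverse problem y = A x + eps (illustrative sizes).
d_x, d_y, n_train, n_test = 5, 5, 500, 20_000
A = rng.standard_normal((d_y, d_x))   # forward operator (assumed known here)
C_x = np.eye(d_x)                     # prior covariance, zero prior mean
C_eps = 0.1 * np.eye(d_y)             # noise covariance, zero noise mean

def draw(n):
    """Draw n joint samples (x, y) from the model."""
    x = rng.multivariate_normal(np.zeros(d_x), C_x, size=n)
    eps = rng.multivariate_normal(np.zeros(d_y), C_eps, size=n)
    return x, x @ A.T + eps

# Exact LMMSE estimator (zero means): x_hat(y) = C_xy C_yy^{-1} y.
C_xy = C_x @ A.T
C_yy = A @ C_x @ A.T + C_eps
W_exact = C_xy @ np.linalg.inv(C_yy)

# Empirical LMMSE estimator: replace the exact first and second moments
# by their sample counterparts computed from n_train joint samples.
x_tr, y_tr = draw(n_train)
x_bar, y_bar = x_tr.mean(axis=0), y_tr.mean(axis=0)
C_xy_hat = (x_tr - x_bar).T @ (y_tr - y_bar) / n_train
C_yy_hat = (y_tr - y_bar).T @ (y_tr - y_bar) / n_train
W_emp = C_xy_hat @ np.linalg.inv(C_yy_hat)

# Compare mean squared errors on independent test data.
x_te, y_te = draw(n_test)
mse_exact = np.mean(np.sum((x_te - y_te @ W_exact.T) ** 2, axis=1))
mse_emp = np.mean(
    np.sum((x_te - (x_bar + (y_te - y_bar) @ W_emp.T)) ** 2, axis=1)
)
print(f"exact LMMSE mse: {mse_exact:.4f}")
print(f"empirical mse:   {mse_emp:.4f}  (ratio {mse_emp / mse_exact:.4f})")
```

As the number of training samples grows, the ratio of the empirical estimator's mean squared error to that of the exact LMMSE estimator approaches one; the paper's question is how large `n_train` must be for this ratio to fall below a prescribed multiple with high probability.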

