
How many samples are needed to reliably approximate the best linear estimator for a linear inverse problem?
The linear minimum mean squared error (LMMSE) estimator is the best linear estimator for a Bayesian linear inverse problem with respect to the mean squared error. It arises as the solution operator to a Tikhonov-type regularized inverse problem with a particular quadratic discrepancy term and a particular quadratic regularization operator. To evaluate the LMMSE estimator, one must know the forward operator and the first two statistical moments of both the prior and the noise. If such knowledge is not available, the LMMSE estimator may instead be approximated from given samples. In this work, we investigate, in a finite-dimensional setting, how many samples are needed to reliably approximate the LMMSE estimator, in the sense that, with high probability, the mean squared error of the approximation is smaller than a given multiple of the mean squared error of the LMMSE estimator.
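As a rough illustration of the sample-based approximation discussed above, the sketch below sets up a toy finite-dimensional Gaussian model y = Ax + ε, forms the exact LMMSE estimator from the known moments, and fits an approximation from k sample pairs via least squares. All dimensions, covariances, and the least-squares fitting choice are our own illustrative assumptions, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model y = A x + eps with zero-mean Gaussian prior and noise
# (dimensions and covariances chosen arbitrarily for illustration).
n, m = 8, 6
A = rng.standard_normal((m, n))
Sigma_x = np.eye(n)            # prior covariance of x
noise_var = 0.1
Sigma_eps = noise_var * np.eye(m)  # noise covariance

# Exact LMMSE estimator x_hat = B_opt y, with
# B_opt = Sigma_x A^T (A Sigma_x A^T + Sigma_eps)^{-1}.
S = A @ Sigma_x @ A.T + Sigma_eps
B_opt = Sigma_x @ A.T @ np.linalg.inv(S)

def draw(k):
    """Draw k i.i.d. (x, y) sample pairs from the model."""
    X = rng.standard_normal((k, n))                      # x ~ N(0, I)
    E = np.sqrt(noise_var) * rng.standard_normal((k, m)) # eps ~ N(0, Sigma_eps)
    Y = X @ A.T + E
    return X, Y

def empirical_lmmse(X, Y):
    """Approximate B_opt from samples by least squares: X ≈ Y B^T."""
    B_T, *_ = np.linalg.lstsq(Y, X, rcond=None)
    return B_T.T

def mse(B, num=20000):
    """Monte Carlo estimate of E ||x - B y||^2 on fresh samples."""
    X, Y = draw(num)
    R = X - Y @ B.T
    return np.mean(np.sum(R**2, axis=1))

mse_opt = mse(B_opt)
for k in (20, 200, 2000):
    X, Y = draw(k)
    B_hat = empirical_lmmse(X, Y)
    print(k, mse(B_hat) / mse_opt)  # ratio approaches 1 as k grows
```

The printed ratio is exactly the quantity the abstract asks about: how large k must be before the approximation's mean squared error stays within a given multiple of the optimal LMMSE error.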