Barankin Vector Locally Best Unbiased Estimates

06/30/2017
by Bruno Cernuschi-Frias, et al.

The Barankin bound is generalized to the vector case in the mean square error sense. Necessary and sufficient conditions are obtained for an estimator to achieve the lower bound. To obtain the result, a simple finite-dimensional, real-vector-valued generalization of the Riesz representation theorem for Hilbert spaces is given. The bound takes the form of a linear matrix inequality in which the covariance matrix of any unbiased estimator, whenever it exists, is lower bounded by a matrix depending only on the parametrized probability distributions.
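The abstract does not reproduce the bound itself, so for orientation here is a sketch of one standard finite-test-point matrix form of such a bound; it is an assumption that the paper's result specializes to this shape, not a quotation from it. For an unbiased estimator \hat{T} of \theta \in \mathbb{R}^m, test points \theta_1, \dots, \theta_k, and likelihood ratios L_i(x) = p(x; \theta_i) / p(x; \theta),

    \mathrm{Cov}_\theta(\hat{T}) \;\succeq\; \Delta\, G^{-1} \Delta^{\top},
    \qquad G_{il} = \mathbb{E}_\theta\!\big[(L_i - 1)(L_l - 1)\big],
    \qquad \Delta = [\,\theta_1 - \theta \;\cdots\; \theta_k - \theta\,] \in \mathbb{R}^{m \times k},

where \succeq denotes the positive semidefinite (Loewner) order, i.e. a linear matrix inequality of the kind referred to above. Unbiasedness gives \mathbb{E}_\theta[\hat{T} L_i] = \theta_i, and a matrix Cauchy-Schwarz argument then yields the inequality; taking the supremum over k and the test points gives the Barankin bound, a single test point gives the Chapman-Robbins bound, and letting \theta_i \to \theta recovers the Cramér-Rao bound.

As a numerical illustration under these assumptions, the short script below (function name is hypothetical) evaluates the k-test-point bound for the mean of a Gaussian N(theta, sigma^2), where E_theta[L_i L_l] has the closed form exp(delta_i * delta_l / sigma^2) with delta_i = theta_i - theta:

    import numpy as np

    def barankin_bound_gaussian_mean(deltas, sigma):
        # k-test-point lower bound on Var(T) for one sample X ~ N(theta, sigma^2).
        # deltas: distinct nonzero offsets theta_i - theta of the test points.
        # For this model E_theta[L_i L_l] = exp(delta_i * delta_l / sigma^2),
        # so G_il = E[(L_i - 1)(L_l - 1)] = exp(delta_i * delta_l / sigma^2) - 1.
        d = np.asarray(deltas, dtype=float)
        G = np.exp(np.outer(d, d) / sigma**2) - 1.0
        return float(d @ np.linalg.solve(G, d))  # Delta G^{-1} Delta^T (scalar here)

    sigma = 1.0
    for scale in (2.0, 1.0, 0.5, 0.1):
        print(scale, barankin_bound_gaussian_mean(scale * np.array([0.5, 1.0]), sigma))
    # The printed values increase toward the Cramér-Rao bound sigma^2 = 1 as the
    # test points shrink toward the true parameter, as expected for this model.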

Related research

05/14/2023  A Bilateral Bound on the Mean-Square Error for Estimation in Model Mismatch
A bilateral (i.e., upper and lower) bound on the mean-square error under...

03/28/2019  An Improved Lower Bound for Sparse Reconstruction from Subsampled Hadamard Matrices
We give a short argument that yields a new lower bound on the number of ...

01/15/2018  Information Geometric Approach to Bayesian Lower Error Bounds
Information geometry describes a framework where probability densities c...

02/11/2020  Generalized Bayesian Cramér-Rao Inequality via Information Geometry of Relative α-Entropy
The relative α-entropy is the Rényi analog of relative entropy and arise...

12/29/2022  What Estimators Are Unbiased For Linear Models?
The recent thought-provoking paper by Hansen [2022, Econometrica] proved...

05/01/2014  A Structural Approach to Coordinate-Free Statistics
We consider the question of learning in general topological vector space...

05/05/2020  Bias-Variance Tradeoffs in Joint Spectral Embeddings
Latent position models and their corresponding estimation procedures off...
