Some Results on Tighter Bayesian Lower Bounds on the Mean-Square Error

by   Lucien Bacharach, et al.

In random parameter estimation, Bayesian lower bounds (BLBs) on the mean-square error have been observed not to be tight in a number of cases, even when the sample size or the signal-to-noise ratio grows to infinity. In this paper, we study alternative forms of BLBs obtained from a covariance inequality in which the inner product is based on the a posteriori rather than the joint probability density function. We thereby obtain a family of BLBs that is shown to be a counterpart at least as tight as the well-known Weiss-Weinstein family of BLBs, and we extend it to the general case of vector parameter estimation. Conditions for equality between the two families are provided. Focusing on the Bayesian Cramér-Rao bound (BCRB), a definition of efficiency is proposed relative to its tighter form, and efficient estimators are described for several common estimation problems, e.g., scalar and exponential-family model parameter estimation. Finally, an example is provided for which the classical BCRB is known not to be tight, while its tighter form is shown to be, based on formal proofs of asymptotic efficiency of Bayesian estimators. This analysis is corroborated by numerical results.
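To make the role of the classical BCRB concrete, here is a minimal Python sketch, not taken from the paper, of a linear-Gaussian model where the BCRB is attained by the posterior-mean (MMSE) estimator; the model, variances, and variable names are illustrative assumptions. The paper's example, by contrast, concerns a setting where the classical BCRB is not tight while its tighter form is.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear-Gaussian model (not the paper's example):
# theta ~ N(0, s_p^2),  x | theta ~ N(theta, s_n^2).
s_p, s_n = 1.0, 0.5
n_trials = 200_000

theta = rng.normal(0.0, s_p, n_trials)      # parameters drawn from the prior
x = theta + rng.normal(0.0, s_n, n_trials)  # noisy observations

# Posterior mean (MMSE estimator) for this conjugate model.
theta_hat = x * s_p**2 / (s_p**2 + s_n**2)

mse = np.mean((theta_hat - theta) ** 2)

# Classical BCRB: inverse of the Bayesian Fisher information
# J = E[(d/dtheta log p(x, theta))^2] = 1/s_n^2 + 1/s_p^2.
bcrb = 1.0 / (1.0 / s_n**2 + 1.0 / s_p**2)

print(f"empirical MSE of posterior mean: {mse:.5f}")
print(f"classical BCRB:                  {bcrb:.5f}")
```

In this conjugate case the two printed values coincide up to Monte Carlo error, i.e., the bound is tight; the alternative BLB family studied in the paper is designed to remain tight in cases where this classical bound is not.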

