Cluster-Seeking James-Stein Estimators

02/01/2016
by K. Pavan Srinath, et al.

This paper considers the problem of estimating a high-dimensional vector of parameters θ∈R^n from a noisy observation. The noise vector is i.i.d. Gaussian with known variance. For a squared-error loss function, the James-Stein (JS) estimator is known to dominate the simple maximum-likelihood (ML) estimator when the dimension n exceeds two. The JS-estimator shrinks the observed vector towards the origin, and the risk reduction over the ML-estimator is greatest for θ that lie close to the origin. JS-estimators can be generalized to shrink the data towards any target subspace. Such estimators also dominate the ML-estimator, but the risk reduction is significant only when θ lies close to the subspace. This leads to the question: in the absence of prior information about θ, how do we design estimators that give significant risk reduction over the ML-estimator for a wide range of θ? In this paper, we propose shrinkage estimators that attempt to infer the structure of θ from the observed data in order to construct a good attracting subspace. In particular, the components of the observed vector are separated into clusters, and the elements in each cluster are shrunk towards a common attractor. The number of clusters and the attractor for each cluster are determined from the observed vector. We provide concentration results for the squared-error loss and convergence results for the risk of the proposed estimators. The results show that the estimators give significant risk reduction over the ML-estimator for a wide range of θ, particularly for large n. Simulation results are provided to support the theoretical claims.
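The contrast the abstract draws can be illustrated with a minimal numpy sketch. The two functions below are a positive-part JS estimator shrinking towards the origin and a Lindley-type variant shrinking towards the subspace spanned by the all-ones vector (i.e. towards the grand mean); the names `js_origin` and `js_mean` are illustrative, not the paper's notation, and the cluster-seeking construction itself is not implemented here.

```python
import numpy as np

def js_origin(y, sigma2=1.0):
    """Positive-part James-Stein estimator: shrink y towards the origin."""
    n = y.shape[0]
    c = max(0.0, 1.0 - (n - 2) * sigma2 / np.dot(y, y))
    return c * y

def js_mean(y, sigma2=1.0):
    """Shrink y towards the all-ones subspace, i.e. towards the grand
    mean of the observations (a Lindley-type estimator)."""
    n = y.shape[0]
    ybar = y.mean()
    resid = y - ybar
    c = max(0.0, 1.0 - (n - 3) * sigma2 / np.dot(resid, resid))
    return ybar + c * resid

rng = np.random.default_rng(0)
n, sigma2 = 100, 1.0

# Case 1: theta at the origin -- shrinkage towards the origin helps a lot.
theta0 = np.zeros(n)
y0 = theta0 + rng.standard_normal(n)       # the ML estimate is y0 itself
ml_err0 = np.sum((y0 - theta0) ** 2)
js_err0 = np.sum((js_origin(y0, sigma2) - theta0) ** 2)

# Case 2: theta far from the origin but on the all-ones line --
# shrinkage towards the grand mean is the better attracting subspace.
theta1 = 3.0 * np.ones(n)
y1 = theta1 + rng.standard_normal(n)
ml_err1 = np.sum((y1 - theta1) ** 2)
mean_err1 = np.sum((js_mean(y1, sigma2) - theta1) ** 2)
```

In case 1 the squared error of `js_origin` is far below the ML error, while in case 2 it is `js_mean` that gives the large reduction: each target helps only when θ lies near it, which is the motivation for inferring the attracting subspace from the data.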


