
Do we need to estimate the variance in robust mean estimation?

by   Qiang Sun, et al.

This paper studies robust mean estimators for distributions that have only a finite variance. We propose a new loss function that depends on both the mean parameter and a robustification parameter. By minimizing the empirical loss jointly over the two parameters, the resulting estimate of the robustification parameter adapts automatically to the data and the unknown variance, and the resulting mean estimator achieves near-optimal finite-sample performance. Compared with prior work, our method is computationally efficient and user-friendly: it requires no cross-validation to tune the robustification parameter.
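The joint optimization described above can be illustrated with a simple alternating scheme: for a fixed robustification parameter tau, compute the Huber M-estimate of the mean; then update tau from the truncated second moment of the residuals. This is a heuristic sketch, not the paper's exact algorithm; the function names, the initialization, and the specific tau update rule (a fixed point of tau^2 = mean(min(r^2, tau^2)) * n / log n, which yields the familiar tau ~ sigma * sqrt(n / log n) scaling) are assumptions.

```python
import numpy as np

def huber_mean(x, tau, n_iter=100, tol=1e-8):
    """Huber M-estimate of the mean for a FIXED robustification parameter tau,
    computed by iteratively reweighted least squares."""
    mu = np.median(x)
    for _ in range(n_iter):
        r = x - mu
        # Huber weights: 1 for small residuals, tau/|r| for large ones.
        w = np.minimum(1.0, tau / np.maximum(np.abs(r), 1e-12))
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            return mu_new
        mu = mu_new
    return mu

def adaptive_huber_mean(x, n_iter=50, tol=1e-8):
    """Heuristic alternating scheme (an assumption, not the paper's method):
    alternate a Huber mean update with a data-driven update of tau so that
    tau adapts to the unknown variance without cross-validation."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    tau = np.std(x) * np.sqrt(n / np.log(n))  # hypothetical initialization
    mu = np.median(x)
    for _ in range(n_iter):
        mu_new = huber_mean(x, tau)
        # Truncated second moment of the residuals drives the tau update.
        r2 = np.minimum((x - mu_new) ** 2, tau ** 2)
        tau_new = np.sqrt(np.mean(r2) * n / np.log(n))
        if abs(mu_new - mu) < tol and abs(tau_new - tau) < tol * max(tau, 1.0):
            return mu_new, tau_new
        mu, tau = mu_new, tau_new
    return mu, tau
```

On heavy-tailed data with a finite variance (e.g. Student-t with 2.5 degrees of freedom), `adaptive_huber_mean` returns a mean estimate close to the true mean together with the self-tuned tau, with no user-supplied tuning parameter.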



