Do we need to estimate the variance in robust mean estimation?

06/30/2021
by Qiang Sun, et al.

This paper studies robust mean estimation for distributions with only finite variances. We propose a new loss function that depends jointly on the mean parameter and a robustification parameter. By simultaneously minimizing the empirical loss with respect to both parameters, we show that the estimator of the robustification parameter automatically adapts to the data and to the unknown variance, so that the resulting mean estimator achieves near-optimal finite-sample performance. Compared with prior work, our method is computationally efficient and user-friendly: it requires no cross-validation to tune the robustification parameter.
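To make the computational pattern concrete, below is a minimal sketch of a self-tuned robust mean estimator that minimizes a single empirical criterion jointly over the mean and the robustification parameter, with no cross-validation. The criterion used here is Huber's classical concomitant-scale objective, chosen only as an illustration; it is not the loss function proposed in the paper, and the function names and tuning constants (c, a) are assumptions made for this demonstration.

import numpy as np
from scipy.optimize import minimize

def huber_rho(r, c=1.345):
    # Standard Huber rho: quadratic for |r| <= c, linear beyond.
    a = np.abs(r)
    return np.where(a <= c, 0.5 * r ** 2, c * a - 0.5 * c ** 2)

def joint_objective(params, x, c=1.345, a=0.5):
    # Illustrative joint criterion (Huber loss with concomitant scale),
    # NOT the paper's proposed loss: jointly convex in (theta, tau) for tau > 0.
    theta, log_tau = params
    tau = np.exp(log_tau)  # optimize log(tau) so tau stays positive
    return np.mean(tau * huber_rho((x - theta) / tau, c) + a * tau)

def self_tuned_mean(x, c=1.345, a=0.5):
    # A single joint minimization returns both the mean estimate and the
    # data-adaptive robustification parameter; no cross-validation needed.
    theta0 = np.median(x)
    tau0 = np.std(x) + 1e-8
    res = minimize(joint_objective, x0=[theta0, np.log(tau0)], args=(x, c, a))
    return res.x[0], np.exp(res.x[1])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_t(df=3, size=500) + 1.0  # heavy-tailed sample with true mean 1
    theta_hat, tau_hat = self_tuned_mean(x)
    print(f"mean estimate: {theta_hat:.3f}, robustification parameter: {tau_hat:.3f}")

Whatever the paper's specific criterion, the pattern is the same: one optimization over (theta, tau) replaces a separate variance estimate or a cross-validated choice of the robustification parameter.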


