Robust variance-regularized risk minimization with concomitant scaling

01/27/2023
by Matthew J. Holland, et al.

For losses which are potentially heavy-tailed, we consider the task of minimizing the sum of the loss mean and standard deviation, without trying to accurately estimate the variance. By modifying a technique for variance-free robust mean estimation to fit our problem setting, we derive a simple learning procedure that can easily be combined with standard gradient-based solvers and used in traditional machine learning workflows. Empirically, we verify that our proposed approach, despite its simplicity, performs as well as or better than even the best-performing candidates derived from alternative criteria such as CVaR or DRO risks, on a variety of datasets.
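For orientation, the underlying criterion is easy to state in code: take per-example losses on a minibatch and descend on their mean plus a multiple of their standard deviation. The sketch below is a minimal illustration of this naive criterion only, not the paper's robust concomitant-scaling procedure (which replaces the direct standard-deviation estimate with a variance-free robust one); the weight lam, the function name, the toy model, and the data are all illustrative assumptions.

```python
# Minimal sketch (not the authors' procedure): minibatch SGD on the
# sum of the empirical loss mean and standard deviation.
import torch

def mean_plus_std_objective(per_example_losses, lam=1.0):
    """Naive criterion: mean + lam * standard deviation of the losses."""
    return per_example_losses.mean() + lam * per_example_losses.std()

# Toy linear regression with squared error, trained on the criterion.
torch.manual_seed(0)
X = torch.randn(256, 5)
y = X @ torch.randn(5) + 0.1 * torch.randn(256)

model = torch.nn.Linear(5, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.05)

for step in range(200):
    idx = torch.randint(0, 256, (32,))                    # minibatch
    losses = (model(X[idx]).squeeze(-1) - y[idx]) ** 2    # per-example losses
    obj = mean_plus_std_objective(losses, lam=1.0)
    opt.zero_grad()
    obj.backward()
    opt.step()
```

As the abstract notes, the direct standard-deviation term is exactly what becomes unreliable under heavy-tailed losses, which is what motivates the paper's variance-free robust replacement.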


Related research

10/15/2022 · On Catoni's M-Estimation
Catoni proposed a robust M-estimator and gave the deviation inequality f...

09/07/2023 · Empirical Risk Minimization for Losses without Variance
This paper considers an empirical risk minimization problem under heavy-...

06/03/2020 · Learning with CVaR-based feedback under potentially heavy tails
We study learning algorithms that seek to minimize the conditional value...

06/02/2020 · Improved scalability under heavy tails, without strong convexity
Real-world data is laden with outlying values. The challenge for machine...

01/15/2017 · Regularization, sparse recovery, and median-of-means tournaments
A regularized risk minimization procedure for regression function estima...

05/11/2021 · Spectral risk-based learning using unbounded losses
In this work, we consider the setting of learning problems under a wide ...

08/15/2019 · Robust estimation of the mean with bounded relative standard deviation
Many randomized approximation algorithms operate by giving a procedure f...
