Robust learning with anytime-guaranteed feedback

05/24/2021
by Matthew J. Holland, et al.

Under data distributions that may be heavy-tailed, many stochastic gradient-based learning algorithms are driven by feedback queried at points which, taken on their own, come with almost no performance guarantees. Here we explore a modified "anytime online-to-batch" mechanism which, for smooth objectives, admits high-probability error bounds while requiring only lower-order moment bounds on the stochastic gradients. Using this conversion, we can derive a wide variety of "anytime robust" procedures for which the task of performance analysis effectively reduces to regret control, meaning that existing regret bounds (for the bounded-gradient case) can be robustified and leveraged in a straightforward manner. As a direct takeaway, we obtain an easily implemented stochastic gradient-based algorithm for which all queried points formally enjoy sub-Gaussian error bounds, and which in practice shows noteworthy gains in real-world data applications.
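To make the mechanism concrete, here is a minimal Python sketch in the spirit of the abstract: it pairs an anytime online-to-batch conversion (gradients are queried at a running average of the online learner's iterates, in the style of Cutkosky's construction) with simple gradient norm-clipping as the robustification step. The clipping rule, base learner, step size, and threshold below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def clip_gradient(g, threshold):
    """Norm-clip a stochastic gradient. A simple stand-in for the robust
    gradient feedback discussed in the paper; the threshold is a free
    parameter of this sketch."""
    norm = np.linalg.norm(g)
    return g if norm <= threshold else g * (threshold / norm)

def anytime_robust_sgd(grad_oracle, w0, lr=0.05, threshold=5.0, n_steps=1000):
    """Anytime online-to-batch conversion with a clipped-SGD base learner.

    grad_oracle(x) returns a stochastic gradient of the smooth objective
    at x. The key feature of the anytime conversion is that gradients are
    queried at the running average x_t of the online iterates, so every
    x_t along the way (not just the final output) is a candidate solution.
    """
    w = np.asarray(w0, dtype=float)   # online learner's iterate w_t
    x = w.copy()                      # running average x_t (the query point)
    for t in range(1, n_steps + 1):
        g = clip_gradient(grad_oracle(x), threshold)  # robust feedback at x_t
        w = w - lr * g                                # online (SGD) update
        x = x + (w - x) / (t + 1)                     # x_{t+1} = average of w_1..w_{t+1}
    return x

# Toy usage: quadratic objective with heavy-tailed (Student-t) gradient noise.
rng = np.random.default_rng(0)
oracle = lambda x: 2.0 * x + rng.standard_t(df=2, size=x.shape)
x_hat = anytime_robust_sgd(oracle, w0=np.ones(5))
```

Because the guarantee attaches to every query point x_t, an early stop at any iteration still returns a point covered by the high-probability bound, which is the practical appeal of the "anytime" property.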


Related research

05/21/2020  Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient Clipping
In this paper, we propose a new accelerated stochastic first-order metho...

06/02/2020  Improved scalability under heavy tails, without strong convexity
Real-world data is laden with outlying values. The challenge for machine...

06/03/2020  Learning with CVaR-based feedback under potentially heavy tails
We study learning algorithms that seek to minimize the conditional value...

02/25/2021  No-Regret Reinforcement Learning with Heavy-Tailed Rewards
Reinforcement learning algorithms typically assume rewards to be sampled...

02/01/2021  Stochastic Online Convex Optimization; Application to probabilistic time series forecasting
Stochastic regret bounds for online algorithms are usually derived from ...

03/22/2023  Stochastic Nonsmooth Convex Optimization with Heavy-Tailed Noises
Recently, several studies consider the stochastic optimization problem b...

07/26/2021  Convergence in quadratic mean of averaged stochastic gradient algorithms without strong convexity nor bounded gradient
Online averaged stochastic gradient algorithms are more and more studied...
