Nonlinear gradient mappings and stochastic optimization: A general framework with applications to heavy-tail noise

04/06/2022
by Dusan Jakovetic, et al.

We introduce a general framework for nonlinear stochastic gradient descent (SGD) in scenarios where the gradient noise exhibits heavy tails. The proposed framework subsumes several popular nonlinearity choices, such as clipped, normalized, signed, or quantized gradients, and also admits novel nonlinearities. For the considered class of methods, we establish strong convergence guarantees for strongly convex cost functions with Lipschitz continuous gradients, under very general assumptions on the gradient noise. Most notably, we show that, for a nonlinearity with bounded outputs and gradient noise that may not have finite moments of order greater than one, the nonlinear SGD's mean squared error (MSE), or equivalently, the expected cost function's optimality gap, converges to zero at rate O(1/t^ζ), ζ ∈ (0,1). In contrast, in the same noise setting, linear SGD generates a sequence with unbounded variances. Furthermore, for nonlinearities that decouple component-wise, such as sign gradient or component-wise clipping, we show that the nonlinear SGD asymptotically (locally) achieves an O(1/t) rate in the sense of weak convergence, and we explicitly quantify the corresponding asymptotic variance. Experiments show that, while our framework is more general than existing studies of SGD under heavy-tail noise, several easy-to-implement nonlinearities from our framework are competitive with state-of-the-art alternatives on real data sets with heavy-tailed noise.
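To make the update concrete, below is a minimal, self-contained sketch (not the authors' code) of the nonlinear SGD iteration x_{t+1} = x_t - α_t Ψ(g_t), where g_t is the true gradient corrupted by heavy-tailed noise and Ψ is one of the nonlinearities mentioned above. The strongly convex quadratic test problem, the Pareto-type noise with infinite variance, the clipping level τ, and the step-size schedule a/(t+1) are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Sketch of nonlinear SGD: x_{t+1} = x_t - alpha_t * Psi(g_t),
# where g_t is a noisy gradient corrupted by heavy-tailed noise.
# The nonlinearities below are illustrative instances of the framework
# (norm clipping, component-wise clipping, sign, normalization).

def clip(g, tau=1.0):                     # joint (norm) clipping
    n = np.linalg.norm(g)
    return g if n <= tau else tau * g / n

def clip_cw(g, tau=1.0):                  # component-wise clipping
    return np.clip(g, -tau, tau)

def sign(g):                              # sign gradient
    return np.sign(g)

def normalize(g, eps=1e-12):              # normalized gradient
    return g / (np.linalg.norm(g) + eps)

def heavy_tail_noise(d, rng, alpha=1.5):
    # Symmetric Pareto-type noise: finite mean, infinite variance for alpha < 2.
    return rng.choice([-1.0, 1.0], size=d) * (rng.pareto(alpha, size=d) + 1.0)

def nonlinear_sgd(psi, d=10, T=10_000, a=1.0, seed=0):
    rng = np.random.default_rng(seed)
    A = np.diag(np.linspace(1.0, 5.0, d))      # strongly convex quadratic f(x) = 0.5 x^T A x, minimizer x* = 0
    x = rng.standard_normal(d)
    for t in range(T):
        g = A @ x + heavy_tail_noise(d, rng)   # noisy gradient
        x = x - a / (t + 1) * psi(g)           # nonlinear SGD step
    return x

if __name__ == "__main__":
    for name, psi in [("clip", clip), ("clip_cw", clip_cw),
                      ("sign", sign), ("normalize", normalize)]:
        x_final = nonlinear_sgd(psi)
        print(f"{name:10s} final ||x - x*|| = {np.linalg.norm(x_final):.4f}")
```

Replacing psi with the identity map in this sketch recovers linear SGD, which, under noise of this kind, produces iterates with unbounded variance; the bounded-output nonlinearities keep the error controlled, consistent with the O(1/t^ζ) MSE guarantee stated above.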


Related research:

07/13/2023 · Weighted Averaged Stochastic Gradient Descent: Asymptotic Normality and Optimality
Stochastic Gradient Descent (SGD) is one of the simplest and most popula...

12/06/2019 · Why ADAM Beats SGD for Attention Models
While stochastic gradient descent (SGD) is still the de facto algorithm ...

12/22/2022 · Nonlinear consensus+innovations under correlated heavy-tailed noises: Mean square convergence rate and asymptotics
We consider distributed recursive estimation of consensus+innovations ty...

10/25/2019 · Bias-Variance Tradeoff in a Sliding Window Implementation of the Stochastic Gradient Algorithm
This paper provides a framework to analyze stochastic gradient algorithm...

12/05/2022 · Rethinking the Structure of Stochastic Gradients: Empirical and Statistical Evidence
Stochastic gradients closely relate to both optimization and generalizat...

11/02/2022 · Large deviations rates for stochastic gradient descent with strongly convex functions
Recent works have shown that high probability metrics with stochastic gr...

04/13/2023 · Multi-kernel Correntropy-based Orientation Estimation of IMUs: Gradient Descent Methods
This paper presents two computationally efficient algorithms for the ori...
