Rethinking the Structure of Stochastic Gradients: Empirical and Statistical Evidence

12/05/2022
by Zeke Xie et al.

Stochastic gradients closely relate to both the optimization and the generalization of deep neural networks (DNNs). Some works attempted to explain the success of stochastic optimization for deep learning by the purported heavy-tail properties of gradient noise, while other works presented theoretical and empirical evidence against the heavy-tail hypothesis on gradient noise. Unfortunately, formal statistical tests for analyzing the structure and heavy tails of stochastic gradients in deep learning are still under-explored. In this paper, we make two main contributions. First, we conduct formal statistical tests on the distribution of stochastic gradients and of gradient noise across both parameters and iterations. Our tests reveal that dimension-wise gradients (the entries of a gradient viewed across parameter dimensions) usually exhibit power-law heavy tails, whereas iteration-wise gradients and the stochastic gradient noise induced by minibatch training usually do not. Second, we further discover that the covariance spectra of stochastic gradients have power-law structures in deep learning. While previous works believed that the anisotropic structure of stochastic gradients matters to deep learning, they did not expect the gradient covariance to have such an elegant mathematical structure. Our work challenges the existing beliefs and provides novel insights into the structure of stochastic gradients in deep learning.
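The following is a minimal, self-contained sketch (in Python with PyTorch and NumPy) of the two kinds of analysis described above, not the authors' actual protocol. The small MLP, the synthetic data, and the helpers minibatch_grad and hill_tail_index are illustrative assumptions; a Hill estimator and an ordinary least-squares fit in log-log space stand in for the paper's formal statistical tests of power-law behavior.

# Minimal sketch: tail indices of stochastic gradients and the covariance spectrum.
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy model and synthetic data (illustrative stand-ins for a real DNN and dataset).
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
loss_fn = nn.MSELoss()
X, y = torch.randn(2048, 32), torch.randn(2048, 1)

def minibatch_grad(batch_size=64):
    """Flattened stochastic gradient for one random minibatch."""
    idx = torch.randint(0, X.shape[0], (batch_size,))
    model.zero_grad()
    loss_fn(model(X[idx]), y[idx]).backward()
    return torch.cat([p.grad.detach().flatten() for p in model.parameters()]).numpy()

# Rows: iterations (minibatches); columns: parameter dimensions.
grads = np.stack([minibatch_grad() for _ in range(512)])

# 1) Tail exponents via a Hill estimator (a simple stand-in for formal power-law tests).
def hill_tail_index(x, k=200):
    x = np.abs(np.asarray(x))
    x = np.sort(x[x > 0])[::-1][:k]      # top-k magnitudes
    return 1.0 / np.mean(np.log(x[:-1] / x[-1]))

dimension_wise = grads.mean(axis=0)   # (approximate) full gradient, viewed across dimensions
iteration_wise = grads[:, 0]          # one coordinate's stochastic gradient across iterations
print("tail index, dimension-wise:", hill_tail_index(dimension_wise))
print("tail index, iteration-wise:", hill_tail_index(iteration_wise))

# 2) Power-law structure of the covariance spectrum of stochastic gradients:
#    eigenvalues vs. rank should be roughly linear in log-log coordinates.
eigvals = np.linalg.eigvalsh(np.cov(grads, rowvar=False))[::-1]
eigvals = eigvals[eigvals > 1e-12]
ranks = np.arange(1, len(eigvals) + 1)
slope, _ = np.polyfit(np.log(ranks), np.log(eigvals), 1)
print("covariance-spectrum log-log slope (power-law exponent):", slope)

If the abstract's findings carry over to this toy setting, one would expect a moderate tail index for the dimension-wise entries, a noticeably thinner tail for the iteration-wise series, and an approximately linear log-log decay of the covariance eigenvalues.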
