Variance Reduction with Sparse Gradients

01/27/2020
by Melih Elibol et al.

Variance reduction methods such as SVRG and SpiderBoost use a mixture of large- and small-batch gradients to reduce the variance of stochastic gradients. Compared to SGD, these methods require at least double the number of operations per update to the model parameters. To reduce the computational cost of these methods, we introduce a new sparsity operator: the random-top-k operator. Our operator reduces computational complexity by combining the top-k operator with the randomized coordinate descent operator to estimate the gradient sparsity exhibited in a variety of applications. With this operator, large-batch gradients offer an extra benefit beyond variance reduction: a reliable estimate of gradient sparsity. Theoretically, our algorithm is at least as good as the best existing algorithm (SpiderBoost), and it further improves whenever the random-top-k operator captures gradient sparsity. Empirically, our algorithm consistently outperforms SpiderBoost across a variety of models and tasks, including image classification, natural language processing, and sparse matrix factorization. We also provide empirical evidence for the intuition behind our algorithm via a simple gradient entropy computation, which quantifies gradient sparsity at every iteration.
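As a rough illustration of the idea described in the abstract (not the paper's exact construction), the sketch below combines a top-k selection driven by a reference gradient with a rescaled random subsample of the remaining coordinates, and includes one simple entropy-based measure of gradient sparsity. The helper names random_top_k and gradient_entropy, and the parameters k1 and k2, are assumptions made here for illustration and do not reproduce the paper's interface or analysis.

import numpy as np

def random_top_k(x, k1, k2, ref=None, rng=None):
    """Illustrative random-top-k sparsifier (a sketch, not the paper's exact operator).

    Keeps the k1 coordinates where the reference vector `ref` (e.g. a
    large-batch gradient) has the largest magnitude, and additionally
    samples k2 of the remaining d - k1 coordinates uniformly at random,
    rescaling them by (d - k1) / k2 so that the sampled block is an
    unbiased estimate of the coordinates it replaces.
    """
    rng = np.random.default_rng() if rng is None else rng
    ref = x if ref is None else ref
    d = x.size
    assert 0 < k1 and 0 < k2 and k1 + k2 <= d
    out = np.zeros_like(x)

    # Deterministic part: top-k1 coordinates by reference magnitude (top-k operator).
    top_idx = np.argpartition(np.abs(ref), -k1)[-k1:]
    out[top_idx] = x[top_idx]

    # Random part: k2 of the remaining coordinates, rescaled for unbiasedness
    # (randomized coordinate descent style).
    rest = np.setdiff1d(np.arange(d), top_idx)
    rand_idx = rng.choice(rest, size=k2, replace=False)
    out[rand_idx] = x[rand_idx] * (d - k1) / k2
    return out

def gradient_entropy(g, eps=1e-12):
    """One simple way to quantify gradient sparsity: the entropy of the
    normalized absolute gradient. Lower entropy means the mass is
    concentrated on few coordinates, i.e. the gradient is effectively sparse."""
    p = np.abs(g) / (np.abs(g).sum() + eps)
    return float(-np.sum(p * np.log(p + eps)))

In a SpiderBoost-style loop, one would compute the large-batch gradient at checkpoint iterations, reuse it both as the variance-reduction anchor and as the reference `ref` above, and apply the operator only to the cheap small-batch gradient differences in between; step sizes, checkpoint frequency, and the choice of k1 and k2 follow the paper and are not reproduced here.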

