LSH-Sampling Breaks the Computation Chicken-and-Egg Loop in Adaptive Stochastic Gradient Estimation

10/30/2019
by Beidi Chen, et al.

Stochastic Gradient Descent (SGD) is the most popular optimization algorithm for large-scale problems. SGD estimates the gradient by uniform sampling with sample size one. Several other works suggest faster epoch-wise convergence by using weighted, non-uniform sampling to obtain better gradient estimates. Unfortunately, the per-iteration cost of maintaining this adaptive sampling distribution exceeds the cost of computing the full gradient itself, a problem we call the chicken-and-egg loop. As a result, the apparent faster convergence per iteration in reality leads to slower convergence in wall-clock time. In this paper, we break this barrier by providing the first demonstration of a scheme, Locality Sensitive Hashing (LSH) sampled Stochastic Gradient Descent (LGD), which yields superior gradient estimates while keeping the per-iteration sampling cost comparable to that of uniform sampling. Such an algorithm is possible due to the sampling view of LSH, which came to light recently. As a consequence of this fast, high-quality estimation, we reduce the running time of all existing gradient descent algorithms that rely on gradient estimates, including Adam, AdaGrad, etc. We demonstrate the effectiveness of our proposal with experiments on linear models as well as the non-linear BERT, a recent, popular deep-learning-based language representation model.
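To make the idea concrete, here is a minimal, illustrative sketch of LSH-based adaptive sampling inside an SGD loop for squared loss. It assumes signed random projections (SRP) as the hash family and least-squares regression; the names (SRPTable, lgd_least_squares) and hyperparameters are ours, not the authors' implementation. The full LGD algorithm uses an LSH family matched to gradient magnitude (a MIPS-style hash) and reweights each sampled gradient by its inverse collision probability to keep the estimate unbiased; this sketch omits that correction for brevity.

```python
import numpy as np

class SRPTable:
    """One LSH table built from k signed random projections (SRP)."""
    def __init__(self, dim, k, rng):
        self.planes = rng.standard_normal((k, dim))
        self.buckets = {}

    def _key(self, v):
        # k-bit signature: sign pattern of the projections
        return tuple((self.planes @ v > 0).astype(np.int8))

    def insert(self, idx, v):
        self.buckets.setdefault(self._key(v), []).append(idx)

    def query(self, v):
        return self.buckets.get(self._key(v), [])


def lgd_least_squares(X, y, lr=1e-2, steps=10_000, k=6, n_tables=16, seed=0):
    """LSH-sampled SGD sketch for 0.5 * (x.w - y)^2 loss."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # One-time preprocessing: index augmented points z_i = [x_i, y_i] so that
    # the query [w, -1] has inner product x_i.w - y_i, i.e. the residual that
    # drives the per-sample gradient magnitude.
    Z = np.hstack([X, y[:, None]])
    tables = [SRPTable(d + 1, k, rng) for _ in range(n_tables)]
    for t in tables:
        for i in range(n):
            t.insert(i, Z[i])

    w = np.zeros(d)
    for _ in range(steps):
        q = np.append(w, -1.0)
        # Points colliding with the query tend to have larger residuals, so
        # the sampled gradient is more informative than a uniform draw.
        candidates = set()
        for t in tables:
            candidates.update(t.query(q))
        i = rng.choice(list(candidates)) if candidates else rng.integers(n)
        grad = X[i] * (X[i] @ w - y[i])  # per-sample gradient
        w -= lr * grad
    return w
```

The key point of the sketch is the cost profile: building the tables is a one-time preprocessing step, and each iteration only hashes the current query and reads a few buckets, so the adaptive sampling adds roughly constant overhead per step instead of a pass over the data.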

Related research

10/01/2018
Optimal Adaptive and Accelerated Stochastic Gradient Descent
Stochastic gradient descent (SGD) methods are the most powerful optimiza...

05/13/2014
Accelerating Minibatch Stochastic Gradient Descent using Stratified Sampling
Stochastic Gradient Descent (SGD) is a popular optimization method which...

12/22/2020
Stochastic Gradient Variance Reduction by Solving a Filtering Problem
Deep neural networks (DNN) are typically optimized using stochastic grad...

05/25/2021
SGD with Coordinate Sampling: Theory and Practice
While classical forms of stochastic gradient descent algorithm treat the...

06/20/2023
Sampling from Gaussian Process Posteriors using Stochastic Gradient Descent
Gaussian processes are a powerful framework for quantifying uncertainty ...

03/15/2017
A New Unbiased and Efficient Class of LSH-Based Samplers and Estimators for Partition Function Computation in Log-Linear Models
Log-linear models are arguably the most successful class of graphical mo...

06/15/2023
Stochastic Re-weighted Gradient Descent via Distributionally Robust Optimization
We develop a re-weighted gradient descent technique for boosting the per...
