SGD without Replacement: Sharper Rates for General Smooth Convex Functions

03/04/2019
by Prateek Jain, et al.

We study stochastic gradient descent without replacement (SGDo) for smooth convex functions. SGDo is widely observed to converge faster than true SGD, where each sample is drawn independently with replacement (Bottou, 2009), and is hence more popular in practice. However, its convergence properties are not well understood, since sampling without replacement introduces coupling between the iterates and the gradients. Using the method of exchangeable pairs to bound the Wasserstein distance, we provide the first non-asymptotic results for SGDo applied to general smooth, strongly convex functions. In particular, we show that SGDo converges at a rate of O(1/K^2), while SGD is known to converge at an O(1/K) rate, where K denotes the number of passes over the data and is required to be large enough. Existing results for SGDo in this setting require an additional Hessian Lipschitz assumption (Gürbüzbalaban et al., 2015; HaoChen and Sra, 2018). For small K, we show that SGDo achieves the same convergence rate as SGD for general smooth, strongly convex functions. Existing results in this setting require K=1 and hold only for generalized linear models (Shamir, 2016). In addition, through a careful analysis of the coupling, for both large and small K we obtain better dependence on problem-dependent parameters such as the condition number.
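To make the sampling distinction concrete, below is a minimal sketch (not the paper's algorithm or analysis) comparing with-replacement SGD against SGD without replacement, i.e., reshuffling the data at every pass, on a toy least-squares objective. The problem sizes, step size eta, and number of passes K are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.standard_normal((n, d))
w_star = rng.standard_normal(d)
y = X @ w_star  # noiseless targets, so both variants can reach w_star

# Each component f_i(w) = 0.5 * (x_i^T w - y_i)^2 is ||x_i||^2-smooth,
# so a constant step of 1 / max_i ||x_i||^2 is a safe, illustrative choice.
eta = 1.0 / np.max(np.sum(X ** 2, axis=1))
K = 20  # number of passes over the data

def grad_i(w, i):
    # Gradient of the i-th component function.
    return (X[i] @ w - y[i]) * X[i]

def run_sgd(with_replacement):
    w = np.zeros(d)
    for _ in range(K):
        if with_replacement:
            order = rng.integers(0, n, size=n)   # i.i.d. indices, repeats allowed
        else:
            order = rng.permutation(n)           # one full pass, no repeats
        for i in order:
            w = w - eta * grad_i(w, i)
    return w

print("with replacement:   ", np.linalg.norm(run_sgd(True) - w_star))
print("without replacement:", np.linalg.norm(run_sgd(False) - w_star))
```

With a constant step size and the same number of passes, the without-replacement run typically ends closer to w_star, which is the empirical observation motivating the paper's analysis.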



Related research:

- Closing the convergence gap of SGD without replacement (02/24/2020): Stochastic gradient descent without replacement sampling is widely used ...
- Random Reshuffling: Simple Analysis with Vast Improvements (06/10/2020): Random Reshuffling (RR) is an algorithm for minimizing finite-sum functi...
- Permutation-Based SGD: Is Random Optimal? (02/19/2021): A recent line of ground-breaking results for permutation-based SGD has c...
- Ordering for Non-Replacement SGD (06/28/2023): One approach for reducing run time and improving efficiency of machine l...
- Sampling without Replacement Leads to Faster Rates in Finite-Sum Minimax Optimization (06/07/2022): We analyze the convergence rates of stochastic gradient algorithms for s...
- Variance of finite difference methods for reaction networks with non-Lipschitz rate functions (08/19/2019): Parametric sensitivity analysis is a critical component in the study of ...
- Random Shuffling Beats SGD after Finite Epochs (06/26/2018): A long-standing problem in the theory of stochastic gradient descent (SG...
