Stochastic Gradient Descent, Weighted Sampling, and the Randomized Kaczmarz Algorithm

10/21/2013
by Deanna Needell, et al.

We obtain an improved finite-sample guarantee on the linear convergence of stochastic gradient descent (SGD) for smooth and strongly convex objectives, improving from a quadratic dependence on the conditioning, (L/μ)^2 (where L is a bound on the smoothness and μ on the strong convexity), to a linear dependence on L/μ. Furthermore, we show how reweighting the sampling distribution (i.e., importance sampling) is necessary to improve convergence further, obtaining a linear dependence on the average smoothness and thereby dominating previous results. We also discuss importance sampling for SGD more broadly and show how it can improve convergence in other scenarios as well. Our results are based on a connection we make between SGD and the randomized Kaczmarz algorithm, which allows us to transfer ideas between the separate bodies of literature studying each of the two methods. In particular, we recast the randomized Kaczmarz algorithm as an instance of SGD and apply our results to prove its exponential convergence, but to the solution of a weighted least squares problem rather than the original least squares problem. We then present a modified Kaczmarz algorithm with partially biased sampling which does converge to the original least squares solution, with the same exponential convergence rate.
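To make the Kaczmarz side of this connection concrete, here is a minimal NumPy sketch of randomized Kaczmarz with weighted row sampling on a consistent system Ax = b. Sampling row i with probability proportional to ||a_i||^2 is the Strohmer–Vershynin scheme the abstract builds on, and setting the mixing weight to 1/2 blends in the uniform distribution in the spirit of the partially biased sampling described above. The function name, the `lam` parameter, and the iteration budget are illustrative choices rather than the paper's notation, and the sketch omits the step-size reweighting the paper uses to keep the update an unbiased SGD step.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, lam=0.0, seed=0):
    """Randomized Kaczmarz for a consistent system Ax = b.

    lam = 0.0 gives Strohmer-Vershynin sampling, p_i proportional to
    ||a_i||^2; lam = 0.5 mixes in the uniform distribution, in the
    spirit of the partially biased sampling described above. (Sketch
    only: the paper's hybrid method also rescales the step so the
    update stays an unbiased SGD step, which is omitted here.)
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    row_norms_sq = np.einsum("ij,ij->i", A, A)           # ||a_i||^2
    p = lam / n + (1.0 - lam) * row_norms_sq / row_norms_sq.sum()
    x = np.zeros(d)
    for _ in range(iters):
        i = rng.choice(n, p=p)
        a_i = A[i]
        # Project x onto the hyperplane {z : <a_i, z> = b_i}.
        x += (b[i] - a_i @ x) / row_norms_sq[i] * a_i
    return x

# Quick check on a consistent random system: the iterates converge
# to the true solution at the exponential rate the abstract describes.
rng = np.random.default_rng(1)
A = rng.standard_normal((300, 20))
x_true = rng.standard_normal(20)
x_hat = randomized_kaczmarz(A, A @ x_true, lam=0.5)
print(np.linalg.norm(x_hat - x_true))   # tiny, e.g. on the order of 1e-12
```

For a consistent system any sampling distribution with full support yields convergence to the true solution; the choice of weights only affects the rate. The distinction the abstract draws, between converging to the original versus a weighted least squares solution, matters in the inconsistent (noisy) case.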
