Unified Analysis of Stochastic Gradient Methods for Composite Convex and Smooth Optimization

06/20/2020
by   Ahmed Khaled, et al.

We present a unified theorem for the convergence analysis of stochastic gradient algorithms for minimizing a smooth and convex loss plus a convex regularizer. We do this by extending the unified analysis of Gorbunov, Hanzely & Richtárik (2020) and dropping the requirement that the loss function be strongly convex; instead, we rely only on convexity of the loss. Our unified analysis applies to a host of existing algorithms, including proximal SGD, variance reduced methods, quantization methods and some coordinate descent type methods. For the variance reduced methods, we recover the best known convergence rates as special cases. For proximal SGD and the quantization and coordinate descent type methods, we obtain new state-of-the-art convergence rates. Our analysis also covers arbitrary sampling and minibatching. As a result, we can determine the minibatch size that optimizes the total complexity of variance reduced methods, and we obtain a simple formula for the optimal minibatch size of two such methods (L-SVRG and SAGA). This optimal minibatch size not only improves the theoretical total complexity of the methods but also improves their convergence in practice, as we show in several experiments.
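To make the setting concrete, below is a minimal sketch of minibatch proximal SGD on a composite problem of the kind covered by the analysis: a smooth convex loss (here an illustrative least-squares loss) plus a convex regularizer (here an l1 penalty, whose proximal operator is soft-thresholding). The function names prox_sgd and soft_threshold, the problem instance, and the stepsize and minibatch choices are illustrative assumptions, not taken from the paper.

```python
import numpy as np


def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)


def prox_sgd(A, b, lam, gamma, batch_size, n_iters, seed=0):
    """Minibatch proximal SGD for min_x (1/2n)||Ax - b||^2 + lam * ||x||_1.

    Each step: x <- prox_{gamma * lam * ||.||_1}(x - gamma * g),
    where g is the gradient of the smooth loss on a sampled minibatch.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(n_iters):
        idx = rng.choice(n, size=batch_size, replace=False)   # uniform minibatch sampling
        g = A[idx].T @ (A[idx] @ x - b[idx]) / batch_size      # minibatch gradient of the smooth loss
        x = soft_threshold(x - gamma * g, gamma * lam)         # proximal (forward-backward) step
    return x


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 50))
    x_true = np.zeros(50)
    x_true[:5] = 1.0
    b = A @ x_true + 0.01 * rng.standard_normal(200)
    x_hat = prox_sgd(A, b, lam=0.1, gamma=0.01, batch_size=16, n_iters=2000)
    print("distance to planted solution:", np.linalg.norm(x_hat - x_true))
```

The variance reduced methods analyzed in the paper (e.g. L-SVRG and SAGA) replace the plain minibatch gradient g above with a variance reduced estimator, while keeping the same proximal step; the paper's result is a single convergence theorem that covers this whole family under convexity and smoothness of the loss.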


Related research

- A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent (05/27/2019)
- Sampling and Update Frequencies in Proximal Variance Reduced Stochastic Gradient Methods (02/13/2020)
- Unified analysis of SGD-type methods (03/29/2023)
- One Method to Rule Them All: Variance Reduction for Data, Parameters and Many New Methods (05/27/2019)
- Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition (08/16/2016)
- MURANA: A Generic Framework for Stochastic Variance-Reduced Optimization (06/06/2021)
- Convergence of Distributed Stochastic Variance Reduced Methods without Sampling Extra Data (05/29/2019)
