Optimal mini-batch and step sizes for SAGA

01/31/2019
by Nidham Gazagnadou, et al.

Recently, it has been shown that the step sizes of a family of variance-reduced gradient methods, called the JacSketch methods, depend on an expected smoothness constant. In particular, if this expected smoothness constant could be calculated a priori, one could safely set much larger step sizes and thereby obtain a much faster convergence rate. We fill this gap by providing simple closed-form expressions for the expected smoothness constant, together with careful numerical experiments verifying these bounds. Using these bounds, and since the SAGA algorithm belongs to the JacSketch family, we suggest a new standard practice for setting the step size and mini-batch size of SAGA that is competitive with a numerical grid search. Furthermore, we can now show that the total complexity of the SAGA algorithm decreases linearly in the mini-batch size up to a pre-defined value: the optimal mini-batch size. This is a rare result in the stochastic variance-reduced literature, previously shown only for the Katyusha algorithm. Finally, we conjecture that the same holds for many other stochastic variance-reduced methods, and that our bounds on and analysis of the expected smoothness constant are key to extending these results.
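The abstract stops short of reproducing the closed-form expressions themselves, so the sketch below is purely illustrative. It assumes a simple interpolation bound on the expected smoothness constant for b-nice sampling (equal to L_max at b = 1 and to the smoothness L_bar of the full objective at b = n) and a step size of the form 1 / (4 L(b)), then plugs both into a textbook mini-batch SAGA loop for ridge regression. The interpolation formula, the constant 4, and all names below are assumptions for illustration, not the paper's exact expressions.

    import numpy as np

    def expected_smoothness_bound(b, L_bar, L_max, n):
        # Hypothetical interpolation bound for b-nice (uniform, without
        # replacement) sampling: equals L_max at b = 1 and L_bar at b = n.
        return (n * (b - 1)) / (b * (n - 1)) * L_bar \
            + (n - b) / (b * (n - 1)) * L_max

    def saga_step_size(b, L_bar, L_max, n):
        # Step size of the form 1 / (4 * L(b)); the constant 4 is a
        # conservative placeholder, not the paper's exact expression.
        return 1.0 / (4.0 * expected_smoothness_bound(b, L_bar, L_max, n))

    def minibatch_saga(A, y, b, lam=0.1, n_iters=10000, seed=0):
        # Textbook mini-batch SAGA for L2-regularized least squares:
        # f_i(w) = 0.5 * (a_i^T w - y_i)^2 + 0.5 * lam * ||w||^2.
        rng = np.random.default_rng(seed)
        n, d = A.shape
        L_max = np.max(np.sum(A ** 2, axis=1)) + lam   # max per-example smoothness
        L_bar = np.linalg.norm(A, 2) ** 2 / n + lam    # smoothness of the average
        gamma = saga_step_size(b, L_bar, L_max, n)
        w = np.zeros(d)
        J = np.zeros((n, d))     # stored per-example gradients ("Jacobian" table)
        J_avg = np.zeros(d)      # running average of the table
        for _ in range(n_iters):
            S = rng.choice(n, size=b, replace=False)   # b-nice sampling
            fresh = (A[S] @ w - y[S])[:, None] * A[S] + lam * w
            stale = J[S]
            # SAGA estimator: fresh minus stored gradients, plus table average.
            g = fresh.mean(axis=0) - stale.mean(axis=0) + J_avg
            w -= gamma * g
            # Update the gradient table and its running average.
            J_avg += (fresh.sum(axis=0) - stale.sum(axis=0)) / n
            J[S] = fresh
        return w

    # Example usage with synthetic data:
    # w = minibatch_saga(np.random.randn(200, 10), np.random.randn(200), b=16)

Note how, under this kind of bound, L(b) shrinks from L_max toward L_bar as b grows, so the safe step size grows with the mini-batch size; according to the paper, this is what drives the linear decrease in total complexity up to the optimal mini-batch size.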


Related Research

01/27/2019
SGD: General Analysis and Improved Rates
We propose a general yet simple theorem describing the convergence of SG...

05/28/2016
Optimal Rates for Multi-pass Stochastic Gradient Methods
We analyze the learning properties of the stochastic gradient method whe...

02/08/2018
Mini-Batch Stochastic ADMMs for Nonconvex Nonsmooth Optimization
In the paper, we study the mini-batch stochastic ADMMs (alternating dire...

06/27/2020
Hybrid Variance-Reduced SGD Algorithms For Nonconvex-Concave Minimax Problems
We develop a novel variance-reduced algorithm to solve a stochastic nonc...

08/07/2018
Fast Variance Reduction Method with Stochastic Batch Size
In this paper we study a family of variance reduction methods with rando...

10/21/2017
Optimal Rates for Learning with Nyström Stochastic Gradient Methods
In the setting of nonparametric regression, we propose and study a combi...

07/17/2018
Learning with SGD and Random Features
Sketching and stochastic gradient methods are arguably the most common t...

Code Repositories

glm_saga

Minimal, standalone library for solving GLMs in PyTorch


