Fast Variance Reduction Method with Stochastic Batch Size

08/07/2018
by Xuanqing Liu, et al.

In this paper we study a family of variance reduction methods with randomized batch size: at each step, the algorithm first randomly chooses a batch size and then selects a batch of samples to perform a variance-reduced stochastic update. We establish a linear convergence rate for this framework on composite functions, and show that when performance is measured per data access, the optimal strategy is to always use a batch size of 1, which recovers the SAGA algorithm. However, because of cache and disk I/O effects in real computer architectures, the number of data accesses does not reflect running time: 1) random memory access is much slower than sequential access, and 2) when the data are too large to fit in memory, disk seeks take even longer. Once these effects are taken into account, a batch size of 1 is no longer optimal, so we propose a new algorithm called SAGA++ and show how to compute the optimal average batch size theoretically. Our algorithm outperforms SAGA and other existing batched and stochastic solvers on real datasets. In addition, we conduct a precise analysis comparing different update rules for variance reduction methods, showing that SAGA++ converges faster than SVRG in theory.
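The abstract describes the update rule only at a high level. As a rough illustration, the Python sketch below implements a SAGA-style variance-reduced step in which the batch size is redrawn at random at each iteration; the function names (saga_random_batch, grad_i), the Poisson-distributed batch size, and the fixed step size are illustrative assumptions and not the SAGA++ schedule derived in the paper.

```python
import numpy as np

def saga_random_batch(grad_i, n, x0, step=1e-2, n_steps=1000, seed=0,
                      batch_size=lambda rng: 1 + rng.poisson(3.0)):
    """Variance-reduced updates with a stochastic batch size (illustrative sketch).

    grad_i(x, i) returns the gradient of the i-th sample's loss at x.
    batch_size(rng) draws the batch size used at each step (assumed distribution).
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()
    # SAGA-style table of the most recently seen per-sample gradients
    table = np.array([grad_i(x0, i) for i in range(n)])
    g_avg = table.mean(axis=0)  # running mean of the table
    for _ in range(n_steps):
        b = min(batch_size(rng), n)                 # stochastic batch size
        idx = rng.choice(n, size=b, replace=False)  # sample a batch
        new = np.array([grad_i(x, i) for i in idx])
        # unbiased variance-reduced gradient estimate over the batch
        v = new.mean(axis=0) - table[idx].mean(axis=0) + g_avg
        x -= step * v
        # refresh the stored gradients and their running mean
        g_avg += (new - table[idx]).sum(axis=0) / n
        table[idx] = new
    return x

# Example: least-squares loss on synthetic data (purely illustrative)
rng = np.random.default_rng(1)
A, b = rng.normal(size=(500, 20)), rng.normal(size=500)
g = lambda x, i: A[i] * (A[i] @ x - b[i])  # per-sample gradient
x_hat = saga_random_batch(g, n=500, x0=np.zeros(20))
```

With the batch-size distribution degenerate at 1, the sketch reduces to plain SAGA; the point of the paper is that a larger average batch size can be preferable once memory and disk access costs are modeled.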

Related research

- 02/21/2022: MSTGD: A Memory Stochastic sTratified Gradient Descent Method with an Exponential Convergence Rate. "The fluctuation effect of gradient expectation and variance caused by pa..."
- 03/29/2012: Corrected Kriging update formulae for batch-sequential data assimilation. "Recently, a lot of effort has been paid to the efficient computation of ..."
- 06/20/2020: Asymptotically Optimal Exact Minibatch Metropolis-Hastings. "Metropolis-Hastings (MH) is a commonly-used MCMC algorithm, but it can b..."
- 01/31/2019: Optimal mini-batch and step sizes for SAGA. "Recently it has been shown that the step sizes of a family of variance r..."
- 05/31/2020: Momentum-based variance-reduced proximal stochastic gradient method for composite nonconvex stochastic optimization. "Stochastic gradient methods (SGMs) have been extensively used for solvin..."
- 06/24/2020: Minimal Variance Sampling with Provable Guarantees for Fast Training of Graph Neural Networks. "Sampling methods (e.g., node-wise, layer-wise, or subgraph) has become a..."
- 04/08/2022: Optimizing Coordinative Schedules for Tanker Terminals: An Intelligent Large Spatial-Temporal Data-Driven Approach – Part 1. "In this study, a novel coordinative scheduling optimization approach is ..."
