SpiderBoost: A Class of Faster Variance-reduced Algorithms for Nonconvex Optimization

10/25/2018
by Zhe Wang et al.
Duke University
The Ohio State University

There has been extensive research on developing stochastic variance-reduced methods to solve large-scale optimization problems. More recently, a novel algorithm of this type named SPIDER was developed by Fang et al. (2018) and shown to outperform existing algorithms of the same type and to meet the lower bound in certain regimes. Though interesting in theory, SPIDER requires an ϵ-level stepsize to guarantee convergence, and consequently runs slowly in practice. This paper proposes SpiderBoost as an improved SPIDER scheme, which comes with two major advantages over SPIDER. First, it allows a much larger stepsize without sacrificing the convergence rate, and hence runs substantially faster than SPIDER in practice. Second, it extends much more easily to proximal algorithms with guaranteed convergence for solving composite optimization problems, which appears challenging for SPIDER due to its stringent requirement on the per-iteration increment. Both advantages can be attributed to the new convergence analysis we develop for SpiderBoost, which allows much more flexibility in choosing the algorithm parameters. As a further generalization of SpiderBoost, we show that proximal SpiderBoost achieves a stochastic first-order oracle (SFO) complexity of O(min{n^{1/2}ϵ^{-1}, ϵ^{-3/2}}) for composite optimization, which improves the existing best results by a factor of O(min{n^{1/6}, ϵ^{-1/6}}).
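To make the scheme concrete, below is a minimal NumPy sketch of (proximal) SpiderBoost as the abstract describes it: a SPIDER-style recursive gradient estimator, refreshed with a full gradient every q iterations, driven by a constant, ϵ-independent stepsize, with an optional proximal step for composite objectives. The grad_i interface, the ℓ1 regularizer, and the parameter choices η = 1/(2L) and q ≈ √n are illustrative assumptions based on the paper's parameter regime, not the authors' code.

```python
import numpy as np

def soft_threshold(z, tau):
    # Proximal operator of tau * ||.||_1, used for the composite (proximal) variant.
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def spiderboost(grad_i, x0, n, L, T, lam=0.0, seed=0):
    """Sketch of (proximal) SpiderBoost for min_x (1/n) sum_i f_i(x) + lam * ||x||_1.

    grad_i(x, idx): average gradient of the components f_i, i in idx, at x.
    L: smoothness constant of the components f_i.
    """
    rng = np.random.default_rng(seed)
    q = max(int(np.sqrt(n)), 1)   # epoch length and minibatch size ~ n^{1/2}
    eta = 1.0 / (2.0 * L)         # constant stepsize, independent of epsilon
    x = x0.copy()
    for t in range(T):
        if t % q == 0:
            v = grad_i(x, np.arange(n))                    # full-gradient restart
        else:
            idx = rng.integers(0, n, size=q)               # minibatch sample
            v = grad_i(x, idx) - grad_i(x_prev, idx) + v   # recursive SPIDER estimator
        x_prev = x
        # Plain gradient step; for composite problems, a proximal step instead.
        x = x - eta * v if lam == 0.0 else soft_threshold(x - eta * v, eta * lam)
    return x
```

The contrast with SPIDER lies entirely in the step: SPIDER moves by a normalized increment whose length is kept at the ϵ level, whereas SpiderBoost takes a plain (or proximal) gradient step with stepsize 1/(2L), which is the "much larger stepsize" the abstract refers to.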


Related Research

02/15/2019  ProxSARAH: An Efficient Algorithmic Framework for Stochastic Composite Nonconvex Optimization
  In this paper, we propose a new stochastic algorithmic framework to solv...

02/16/2019  Faster Gradient-Free Proximal Stochastic Methods for Nonconvex Nonsmooth Optimization
  Proximal gradient method has been playing an important role to solve man...

04/01/2022  A Semismooth Newton Stochastic Proximal Point Algorithm with Variance Reduction
  We develop an implementable stochastic proximal point (SPP) method for a...

08/20/2020  An Optimal Hybrid Variance-Reduced Algorithm for Stochastic Composite Nonconvex Optimization
  In this note we propose a new variant of the hybrid variance-reduced pro...

10/27/2019  Improved Zeroth-Order Variance Reduced Algorithms and Analysis for Nonconvex Optimization
  Two types of zeroth-order stochastic algorithms have recently been desig...

06/21/2017  Improved Optimization of Finite Sums with Minibatch Stochastic Variance Reduced Proximal Iterations
  We present novel minibatch stochastic optimization methods for empirical...

06/16/2020  Enhanced First and Zeroth Order Variance Reduced Algorithms for Min-Max Optimization
  Min-max optimization captures many important machine learning problems s...

Code Repositories

Variance_Reduced_Optimizers_Pytorch

PyTorch Implementation of Variance Reduced Optimization Algorithms -- SARAH and SVRG.
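For orientation, the two estimators named in the repo description differ only in where their control variate is anchored; a schematic sketch, reusing the hypothetical grad_i interface from the sketch above:

```python
def svrg_estimator(grad_i, x, x_snap, mu_snap, idx):
    # SVRG: anchored at a fixed snapshot x_snap whose full gradient mu_snap
    # is recomputed once per epoch; the resulting estimator is unbiased.
    return grad_i(x, idx) - grad_i(x_snap, idx) + mu_snap

def sarah_estimator(grad_i, x, x_prev, v_prev, idx):
    # SARAH (the estimator SPIDER/SpiderBoost build on): anchored recursively
    # at the previous iterate; biased, but its variance telescopes along the
    # optimization trajectory.
    return grad_i(x, idx) - grad_i(x_prev, idx) + v_prev
```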


