Stochastic Variance Reduction via Accelerated Dual Averaging for Finite-Sum Optimization

06/18/2020
by   Chaobing Song, et al.

In this paper, we introduce a simplified and unified method for finite-sum convex optimization, named Stochastic Variance Reduction via Accelerated Dual Averaging (SVR-ADA). In the nonstrongly convex and smooth setting, SVR-ADA attains an O(1/n)-accurate solution in O(n log log n) stochastic gradient evaluations, where n is the number of samples; it thus matches the lower bound of this setting up to a log log n factor. In the strongly convex and smooth setting, SVR-ADA matches the lower bound in the regime n ≤ O(κ), while in the regime n ≫ κ it improves the rate to O(n log log n + n log(1/(nϵ))/log(n/κ)), where κ is the condition number. SVR-ADA improves the complexity of the best-known methods without resorting to any additional strategy such as optimal black-box reduction, and it admits a unified convergence analysis and a simplified algorithm for both the nonstrongly convex and strongly convex settings. Experiments on real datasets also show the superior performance of SVR-ADA over existing methods on large-scale machine learning problems.
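To make the finite-sum setting concrete, here is a minimal sketch of a classic stochastic variance-reduction step (an SVRG-style estimator) on a least-squares finite sum. This is an illustrative example of variance reduction in general, NOT the SVR-ADA algorithm from the paper, which additionally combines variance reduction with accelerated dual averaging; the function name and problem instance are assumptions for the demo.

```python
import numpy as np

def svrg_least_squares(A, b, step=0.05, epochs=30, seed=0):
    """Illustrative SVRG-style variance-reduced SGD on the finite sum
    f(x) = (1/2n) * sum_i (a_i^T x - b_i)^2.

    NOTE: generic variance-reduction sketch only; not the SVR-ADA
    method of the paper (no accelerated dual averaging here).
    """
    n, d = A.shape
    rng = np.random.default_rng(seed)
    x = np.zeros(d)
    for _ in range(epochs):
        # Snapshot point and its full (batch) gradient, computed once per epoch.
        x_snap = x.copy()
        full_grad = A.T @ (A @ x_snap - b) / n
        for _ in range(n):
            i = rng.integers(n)
            gi = A[i] * (A[i] @ x - b[i])            # stochastic gradient at x
            gi_snap = A[i] * (A[i] @ x_snap - b[i])  # same sample at the snapshot
            # Variance-reduced estimator: unbiased, and its variance shrinks
            # as both x and x_snap approach the optimum.
            x -= step * (gi - gi_snap + full_grad)
    return x
```

The key point the abstract builds on is that the estimator `gi - gi_snap + full_grad` keeps the cheap per-iteration cost of SGD while its variance vanishes near the solution, enabling the fast rates quoted above.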


