Minimizing Finite Sums with the Stochastic Average Gradient

09/10/2013
by Mark Schmidt, et al.

We propose the stochastic average gradient (SAG) method for optimizing the sum of a finite number of smooth convex functions. Like stochastic gradient (SG) methods, the SAG method's iteration cost is independent of the number of terms in the sum. However, by incorporating a memory of previous gradient values, the SAG method achieves a faster convergence rate than black-box SG methods. The convergence rate is improved from O(1/√k) to O(1/k) in general, and when the sum is strongly convex, the convergence rate is improved from the sub-linear O(1/k) to a linear rate of the form O(ρ^k) for ρ < 1. Furthermore, in many cases the convergence rate of the new method is also faster than that of black-box deterministic gradient methods, in terms of the number of gradient evaluations. Numerical experiments indicate that the new algorithm often dramatically outperforms existing SG and deterministic gradient methods, and that its performance may be further improved through the use of non-uniform sampling strategies.
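To make the idea concrete, below is a minimal Python sketch of a SAG-style update, not the authors' reference implementation: the names sag and grad_i are illustrative, the step size is a hand-picked placeholder (the paper analyzes specific step-size choices), and sampling is uniform (the paper also studies non-uniform strategies). The key point it shows is that a memory of one stored gradient per term, plus a running sum, keeps the per-iteration cost independent of the number of terms n.

import numpy as np

def sag(grad_i, x0, n, step_size, n_iters, seed=0):
    """Sketch of a stochastic average gradient (SAG) iteration.

    grad_i(x, i) returns the gradient of the i-th term f_i at x.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    grads = np.zeros((n, x.size))   # memory of the last gradient seen for each term
    grad_sum = np.zeros_like(x)     # running sum of the stored gradients

    for _ in range(n_iters):
        i = rng.integers(n)             # sample one term uniformly
        g = grad_i(x, i)                # evaluate only that term's gradient
        grad_sum += g - grads[i]        # refresh the running sum in O(dim), not O(n)
        grads[i] = g
        x -= step_size * grad_sum / n   # step along the average of stored gradients
    return x

A small usage example (hypothetical least-squares problem, with gradients initialized to zero as a simplifying convention):

A = np.random.default_rng(1).standard_normal((200, 5))
b = A @ np.ones(5)
x_hat = sag(lambda x, i: (A[i] @ x - b[i]) * A[i],
            np.zeros(5), n=200, step_size=0.05, n_iters=20000)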


Related research

11/14/2017
Stochastic Strictly Contractive Peaceman-Rachford Splitting Method
In this paper, we propose a couple of new Stochastic Strictly Contractiv...

07/10/2014
Finito: A Faster, Permutable Incremental Gradient Method for Big Data Problems
Recent advances in optimization theory have shown that smooth strongly c...

08/15/2023
Quantile Optimization via Multiple Timescale Local Search for Black-box Functions
We consider quantile optimization of black-box functions that are estima...

03/22/2018
SUCAG: Stochastic Unbiased Curvature-aided Gradient Method for Distributed Optimization
We propose and analyze a new stochastic gradient method, which we call S...

01/31/2020
Convergence rate analysis and improved iterations for numerical radius computation
We analyze existing methods for computing the numerical radius and intro...

02/26/2022
Faster One-Sample Stochastic Conditional Gradient Method for Composite Convex Minimization
We propose a stochastic conditional gradient method (CGM) for minimizing...

04/09/2022
Distributed Evolution Strategies for Black-box Stochastic Optimization
This work concerns the evolutionary approaches to distributed stochastic...
