Finito: A Faster, Permutable Incremental Gradient Method for Big Data Problems

07/10/2014
by   Aaron J. Defazio, et al.

Recent advances in optimization theory have shown that smooth strongly convex finite sums can be minimized faster than by treating them as a black-box "batch" problem. In this work we introduce a new method in this class with a theoretical convergence rate four times faster than existing methods, for sums with sufficiently many terms. The method is also amenable to a sampling-without-replacement scheme that gives further speed-ups in practice. We give empirical results showing state-of-the-art performance.
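The abstract describes an incremental gradient method that exploits finite-sum structure and benefits from sampling terms without replacement. The sketch below is a minimal illustration of that family of methods (a stored point and gradient per term, one fresh permutation per epoch), not the paper's exact Finito update: the function name incremental_gradient_permuted, the step_size argument, and the fixed epoch count are illustrative assumptions, and the paper derives its own step-size constant.

import numpy as np

def incremental_gradient_permuted(grad_fns, dim, step_size, n_epochs=50, seed=0):
    # Illustrative sketch of an incremental gradient method with per-term
    # gradient memory and without-replacement (permuted) sampling, in the
    # spirit the abstract describes. This is NOT the paper's exact Finito
    # update; the step_size constant here is an assumption.
    rng = np.random.default_rng(seed)
    n = len(grad_fns)
    phi = np.zeros((n, dim))                 # one stored point per term f_i
    grads = np.stack([g(phi[i]) for i, g in enumerate(grad_fns)])

    for _ in range(n_epochs):
        for j in rng.permutation(n):         # sample without replacement each epoch
            # Step from the average stored point against the average stored gradient.
            w = phi.mean(axis=0) - step_size * grads.mean(axis=0)
            # Refresh only the table entries of the selected term.
            phi[j] = w
            grads[j] = grad_fns[j](w)
    return phi.mean(axis=0)

As a usage example, each grad_fns[i] could return the gradient of one loss term (plus its share of a regularizer) in an empirical risk minimization problem. Maintaining the per-term gradient table is what lets methods in this class make progress at the cost of a single gradient evaluation per step rather than a full batch pass.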


Related research

09/10/2013  Minimizing Finite Sums with the Stochastic Average Gradient
We propose the stochastic average gradient (SAG) method for optimizing t...

01/01/2021  On a Faster R-Linear Convergence Rate of the Barzilai-Borwein Method
The Barzilai-Borwein (BB) method has demonstrated great empirical succes...

06/07/2022  Sampling without Replacement Leads to Faster Rates in Finite-Sum Minimax Optimization
We analyze the convergence rates of stochastic gradient algorithms for s...

07/01/2014  SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives
In this work we introduce a new optimisation method called SAGA in the s...

07/15/2020  Incremental Without Replacement Sampling in Nonconvex Optimization
Minibatch decomposition methods for empirical risk minimization are comm...

02/08/2016  A Simple Practical Accelerated Method for Finite Sums
We describe a novel optimization method for finite sums (such as empiric...
