Linearly convergent stochastic heavy ball method for minimizing generalization error

10/30/2017
by Nicolas Loizou, et al.

In this work we establish the first linear convergence result for the stochastic heavy ball method. The method performs SGD steps with a fixed stepsize, amended by a heavy ball momentum term. In the analysis, we focus on minimizing the expected loss rather than a finite-sum objective; minimizing the expected loss is typically a much harder problem. While the analysis is restricted to quadratic losses, the overall objective is not necessarily strongly convex.
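The update described above can be sketched in a few lines of NumPy on a quadratic least-squares loss. This is an illustrative implementation only, not the authors' code: the function name, stepsize, momentum parameter, and iteration count are all assumptions chosen to make a small consistent linear system converge.

```python
import numpy as np

def stochastic_heavy_ball(A, b, stepsize=0.05, momentum=0.3, iters=2000, seed=0):
    """Sketch of SGD with a fixed stepsize plus a heavy ball momentum term,
    applied to the quadratic loss f(x) = E_i[(a_i^T x - b_i)^2 / 2].
    Hyperparameter values here are illustrative, not from the paper."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x_prev = x = np.zeros(n)
    for _ in range(iters):
        i = rng.integers(m)                        # sample one data point uniformly
        grad = A[i] * (A[i] @ x - b[i])            # stochastic gradient of the quadratic
        x_next = x - stepsize * grad + momentum * (x - x_prev)
        x_prev, x = x, x_next
    return x

# Consistent system b = A @ x_star, so the expected loss is zero at x_star;
# this mirrors the interpolation-type setting in which linear rates are possible.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
x_star = rng.standard_normal(10)
b = A @ x_star
x = stochastic_heavy_ball(A, b)
print(np.linalg.norm(x - x_star))  # error shrinks toward zero
```

Note that the momentum term `momentum * (x - x_prev)` is the only change relative to plain SGD; setting `momentum=0` recovers the baseline method.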


